I0622 21:09:01.697926 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0622 21:09:01.698289 6 e2e.go:109] Starting e2e run "6426321e-d338-493d-9440-c43ed0d034a7" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1592860140 - Will randomize all specs
Will run 278 of 4842 specs

Jun 22 21:09:01.761: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 21:09:01.763: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 22 21:09:01.784: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 22 21:09:01.826: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 22 21:09:01.826: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 22 21:09:01.826: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 22 21:09:01.841: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 22 21:09:01.841: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 22 21:09:01.841: INFO: e2e test version: v1.17.4
Jun 22 21:09:01.842: INFO: kube-apiserver version: v1.17.2
Jun 22 21:09:01.842: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 21:09:01.848: INFO: Cluster IP family: ipv4
SSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:09:01.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Jun 22 21:09:01.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5130.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5130.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 22 21:09:07.985: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:07.988: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:07.990: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:07.992: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:08.002: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:08.005: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:08.008: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:08.011: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:08.017: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:13.022: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.025: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.028: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.043: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.046: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.049: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.051: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:13.056: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:18.022: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.026: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.029: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.041: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.043: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.047: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.050: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:18.055: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:23.022: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.026: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.030: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.034: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.042: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.044: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.047: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.050: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:23.056: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:28.025: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.029: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.032: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.034: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.040: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.042: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.045: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.047: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:28.053: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:33.022: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.026: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.029: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.033: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.047: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.049: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.051: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.054: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local from pod dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685: the server could not find the requested resource (get pods dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685)
Jun 22 21:09:33.059: INFO: Lookups using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5130.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local jessie_udp@dns-test-service-2.dns-5130.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5130.svc.cluster.local]
Jun 22 21:09:38.058: INFO: DNS probes using dns-5130/dns-test-33c28379-5c6a-4f98-b9d4-023caee1f685 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:09:38.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5130" for this suite.
• [SLOW TEST:36.371 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
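Editor's note: the probe loop in the test above is a plain dig poll, so it can be reproduced by hand from any DNS-capable pod. A minimal sketch, assuming a dnsutils-equipped pod named "dns-probe" already running in namespace dns-5130 (the pod name and the kubectl invocation are illustrative, not part of the suite's output):

# UDP lookup (+notcp) of a headless-service subdomain record, mirroring the logged flags
kubectl exec -n dns-5130 dns-probe -- dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5130.svc.cluster.local A
# Same style of lookup over TCP (+tcp); the test writes "OK" only when the answer section is non-empty
kubectl exec -n dns-5130 dns-probe -- dig +tcp +noall +answer +search dns-test-service-2.dns-5130.svc.cluster.local A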
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:09:38.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jun 22 21:09:45.410: INFO: Successfully updated pod "annotationupdatec478124b-47e1-4b1b-8c6a-245724eeb91c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:09:47.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1182" for this suite.
• [SLOW TEST:9.268 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:09:47.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-8bf89b50-83a0-4435-89b0-80f82a82e20a
STEP: Creating a pod to test consume configMaps
Jun 22 21:09:47.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964" in namespace "configmap-31" to be "success or failure"
Jun 22 21:09:47.599: INFO: Pod "pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400341ms
Jun 22 21:09:49.603: INFO: Pod "pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007659526s
Jun 22 21:09:51.606: INFO: Pod "pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011322998s
STEP: Saw pod success
Jun 22 21:09:51.606: INFO: Pod "pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964" satisfied condition "success or failure"
Jun 22 21:09:51.632: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964 container configmap-volume-test:
STEP: delete the pod
Jun 22 21:09:51.667: INFO: Waiting for pod pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964 to disappear
Jun 22 21:09:51.682: INFO: Pod pod-configmaps-f101bfcb-89d0-4ab4-9475-6825be2a7964 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:09:51.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-31" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":69,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:09:51.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jun 22 21:09:51.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12" in namespace "projected-416" to be "success or failure"
Jun 22 21:09:51.780: INFO: Pod "downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12": Phase="Pending", Reason="", readiness=false. Elapsed: 9.443619ms
Jun 22 21:09:53.794: INFO: Pod "downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023897426s
Jun 22 21:09:55.799: INFO: Pod "downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028540757s
STEP: Saw pod success
Jun 22 21:09:55.799: INFO: Pod "downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12" satisfied condition "success or failure"
Jun 22 21:09:55.803: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12 container client-container:
STEP: delete the pod
Jun 22 21:09:55.960: INFO: Waiting for pod downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12 to disappear
Jun 22 21:09:55.978: INFO: Pod downwardapi-volume-f90788ef-da72-4753-8e2a-66b63ec77a12 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:09:55.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-416" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":78,"failed":0}
SSSSSSSSSSSSSSS
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":78,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:09:55.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 22 21:09:56.704: INFO: Pod name wrapped-volume-race-189ca35a-33eb-4ffa-b91a-170851d8599f: Found 0 pods out of 5 Jun 22 21:10:01.713: INFO: Pod name wrapped-volume-race-189ca35a-33eb-4ffa-b91a-170851d8599f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-189ca35a-33eb-4ffa-b91a-170851d8599f in namespace emptydir-wrapper-7308, will wait for the garbage collector to delete the pods Jun 22 21:10:13.803: INFO: Deleting ReplicationController wrapped-volume-race-189ca35a-33eb-4ffa-b91a-170851d8599f took: 7.539874ms Jun 22 21:10:14.103: INFO: Terminating ReplicationController wrapped-volume-race-189ca35a-33eb-4ffa-b91a-170851d8599f pods took: 300.255207ms STEP: Creating RC which spawns configmap-volume pods Jun 22 21:10:29.958: INFO: Pod name wrapped-volume-race-73c9f203-9252-4f3e-8988-6c7b5105e040: Found 0 pods out of 5 Jun 22 21:10:34.964: INFO: Pod name wrapped-volume-race-73c9f203-9252-4f3e-8988-6c7b5105e040: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-73c9f203-9252-4f3e-8988-6c7b5105e040 in namespace emptydir-wrapper-7308, will wait for the garbage collector to delete the pods Jun 22 21:10:49.055: INFO: Deleting ReplicationController wrapped-volume-race-73c9f203-9252-4f3e-8988-6c7b5105e040 took: 14.4895ms Jun 22 21:10:49.355: INFO: Terminating ReplicationController wrapped-volume-race-73c9f203-9252-4f3e-8988-6c7b5105e040 pods took: 300.300512ms STEP: Creating RC which spawns configmap-volume pods Jun 22 21:11:00.591: INFO: Pod name wrapped-volume-race-7d0c6945-5197-4e0b-b502-e18fbd237625: Found 0 pods out of 5 Jun 22 21:11:05.598: INFO: Pod name wrapped-volume-race-7d0c6945-5197-4e0b-b502-e18fbd237625: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d0c6945-5197-4e0b-b502-e18fbd237625 in namespace emptydir-wrapper-7308, will wait for the garbage collector to delete the pods Jun 22 21:11:19.692: INFO: Deleting ReplicationController wrapped-volume-race-7d0c6945-5197-4e0b-b502-e18fbd237625 took: 7.469447ms Jun 22 21:11:19.995: INFO: Terminating ReplicationController wrapped-volume-race-7d0c6945-5197-4e0b-b502-e18fbd237625 pods took: 302.295527ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:11:31.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7308" for this suite. • [SLOW TEST:95.122 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":5,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:11:31.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 22 21:11:39.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 21:11:39.271: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 21:11:41.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 21:11:41.275: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 21:11:43.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 21:11:43.276: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 21:11:45.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 21:11:45.275: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:11:45.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4903" for this suite. 
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:11:31.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 22 21:11:39.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 21:11:39.271: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 21:11:41.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 21:11:41.275: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 21:11:43.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 21:11:43.276: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 21:11:45.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 21:11:45.275: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:11:45.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4903" for this suite.
• [SLOW TEST:14.195 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":123,"failed":0}
SSSSSSSSSSSS
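Editor's note: the pod shape behind the preStop test can be sketched as follows. All names except the pod name (taken from the log) are invented, and the echo hook is a stand-in; the suite's real hook calls back to the handler pod created in BeforeEach:

# Pod with a preStop exec hook; on deletion the hook runs before SIGTERM reaches the container
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran > /proc/1/fd/1"]
EOF
kubectl delete pod pod-with-prestop-exec-hook   # triggers the preStop hook, then termination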
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:11:45.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4505.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4505.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4505.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4505.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 22 21:11:53.434: INFO: DNS probes using dns-4505/dns-test-a8777088-72e0-4468-8630-999900ae3987 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:11:53.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4505" for this suite.
• [SLOW TEST:8.244 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":7,"skipped":135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:11:53.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:12:00.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7983" for this suite.
• [SLOW TEST:7.049 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":8,"skipped":165,"failed":0}
SSSSSSSSSSSS
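Editor's note: the quota flow asserted above is easy to replay with plain kubectl; the namespace and quota names below are invented:

# Create a quota, then read it back: status.used is filled in asynchronously by the
# quota controller, which is the "promptly calculated" behavior the test asserts on
kubectl create namespace quota-demo
kubectl create quota test-quota --hard=pods=2,secrets=4 -n quota-demo
kubectl get resourcequota test-quota -n quota-demo -o yaml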
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:12:00.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-6426/secret-test-ef963929-2e3b-44c8-bf9b-ca4cfa595281
STEP: Creating a pod to test consume secrets
Jun 22 21:12:00.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d" in namespace "secrets-6426" to be "success or failure"
Jun 22 21:12:00.880: INFO: Pod "pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.675251ms
Jun 22 21:12:02.885: INFO: Pod "pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038732766s
Jun 22 21:12:04.888: INFO: Pod "pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042323396s
STEP: Saw pod success
Jun 22 21:12:04.888: INFO: Pod "pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d" satisfied condition "success or failure"
Jun 22 21:12:04.891: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d container env-test:
STEP: delete the pod
Jun 22 21:12:04.926: INFO: Waiting for pod pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d to disappear
Jun 22 21:12:04.928: INFO: Pod pod-configmaps-d8bdcd83-58ed-45cb-a117-86b48a62061d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:12:04.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6426" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:12:04.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jun 22 21:12:05.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a" in namespace "downward-api-9174" to be "success or failure"
Jun 22 21:12:05.037: INFO: Pod "downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.659887ms
Jun 22 21:12:07.042: INFO: Pod "downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027420189s
Jun 22 21:12:09.046: INFO: Pod "downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031548718s
STEP: Saw pod success
Jun 22 21:12:09.046: INFO: Pod "downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a" satisfied condition "success or failure"
Jun 22 21:12:09.049: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a container client-container:
STEP: delete the pod
Jun 22 21:12:09.193: INFO: Waiting for pod downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a to disappear
Jun 22 21:12:09.222: INFO: Pod downwardapi-volume-bfc92255-a0c9-447a-bc03-486ad963aa4a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:12:09.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9174" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:12:09.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jun 22 21:12:09.331: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:12:16.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5030" for this suite.
• [SLOW TEST:7.567 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":11,"skipped":289,"failed":0}
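Editor's note: a rough stand-in (all names invented) for the pod shape this test builds: two init containers that must both exit 0, in order, before the app container starts, on a pod with restartPolicy Never:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "true"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo started"]
EOF
# Inspect the init containers' terminal states once the pod completes
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'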
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:12:16.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-4a7b0322-7c9d-417f-b008-311dc72cc429
STEP: Creating a pod to test consume configMaps
Jun 22 21:12:16.937: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8" in namespace "projected-9808" to be "success or failure"
Jun 22 21:12:16.940: INFO: Pod "pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.328904ms
Jun 22 21:12:18.971: INFO: Pod "pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033389243s
Jun 22 21:12:20.975: INFO: Pod "pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037652792s
STEP: Saw pod success
Jun 22 21:12:20.975: INFO: Pod "pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8" satisfied condition "success or failure"
Jun 22 21:12:20.978: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8 container projected-configmap-volume-test:
STEP: delete the pod
Jun 22 21:12:21.050: INFO: Waiting for pod pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8 to disappear
Jun 22 21:12:21.113: INFO: Pod pod-projected-configmaps-ed7a7afd-8ee0-4a8c-a010-834b262eecf8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:12:21.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9808" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":289,"failed":0}
SSSSSSSSSSSSSSSSSS
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:12:21.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5098 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5098 STEP: creating replication controller externalsvc in namespace services-5098 I0622 21:12:21.468704 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5098, replica count: 2 I0622 21:12:24.519196 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:12:27.519404 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 22 21:12:27.557: INFO: Creating new exec pod Jun 22 21:12:31.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5098 execpoddw74k -- /bin/sh -x -c nslookup clusterip-service' Jun 22 21:12:34.346: INFO: stderr: "I0622 21:12:34.098082 27 log.go:172] (0xc000944b00) (0xc0003a3720) Create stream\nI0622 21:12:34.098142 27 log.go:172] (0xc000944b00) (0xc0003a3720) Stream added, broadcasting: 1\nI0622 21:12:34.101358 27 log.go:172] (0xc000944b00) Reply frame received for 1\nI0622 21:12:34.101392 27 log.go:172] (0xc000944b00) (0xc000718000) Create stream\nI0622 21:12:34.101400 27 log.go:172] (0xc000944b00) (0xc000718000) Stream added, broadcasting: 3\nI0622 21:12:34.102340 27 log.go:172] (0xc000944b00) Reply frame received for 3\nI0622 21:12:34.102388 27 log.go:172] (0xc000944b00) (0xc000736000) Create stream\nI0622 21:12:34.102412 27 log.go:172] (0xc000944b00) (0xc000736000) Stream added, broadcasting: 5\nI0622 21:12:34.103374 27 log.go:172] (0xc000944b00) Reply frame received for 5\nI0622 21:12:34.227043 27 log.go:172] (0xc000944b00) Data frame received for 5\nI0622 21:12:34.227083 27 log.go:172] (0xc000736000) (5) Data frame handling\nI0622 21:12:34.227112 27 log.go:172] (0xc000736000) (5) Data frame sent\n+ nslookup clusterip-service\nI0622 21:12:34.335929 27 log.go:172] (0xc000944b00) Data frame received for 3\nI0622 21:12:34.335952 27 log.go:172] (0xc000718000) (3) Data frame handling\nI0622 21:12:34.335963 27 log.go:172] (0xc000718000) (3) Data frame sent\nI0622 21:12:34.337059 27 
log.go:172] (0xc000944b00) Data frame received for 3\nI0622 21:12:34.337072 27 log.go:172] (0xc000718000) (3) Data frame handling\nI0622 21:12:34.337082 27 log.go:172] (0xc000718000) (3) Data frame sent\nI0622 21:12:34.338121 27 log.go:172] (0xc000944b00) Data frame received for 5\nI0622 21:12:34.338148 27 log.go:172] (0xc000944b00) Data frame received for 3\nI0622 21:12:34.338171 27 log.go:172] (0xc000718000) (3) Data frame handling\nI0622 21:12:34.338185 27 log.go:172] (0xc000736000) (5) Data frame handling\nI0622 21:12:34.340324 27 log.go:172] (0xc000944b00) Data frame received for 1\nI0622 21:12:34.340351 27 log.go:172] (0xc0003a3720) (1) Data frame handling\nI0622 21:12:34.340363 27 log.go:172] (0xc0003a3720) (1) Data frame sent\nI0622 21:12:34.340384 27 log.go:172] (0xc000944b00) (0xc0003a3720) Stream removed, broadcasting: 1\nI0622 21:12:34.340410 27 log.go:172] (0xc000944b00) Go away received\nI0622 21:12:34.340744 27 log.go:172] (0xc000944b00) (0xc0003a3720) Stream removed, broadcasting: 1\nI0622 21:12:34.340756 27 log.go:172] (0xc000944b00) (0xc000718000) Stream removed, broadcasting: 3\nI0622 21:12:34.340761 27 log.go:172] (0xc000944b00) (0xc000736000) Stream removed, broadcasting: 5\n" Jun 22 21:12:34.346: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5098.svc.cluster.local\tcanonical name = externalsvc.services-5098.svc.cluster.local.\nName:\texternalsvc.services-5098.svc.cluster.local\nAddress: 10.100.116.207\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5098, will wait for the garbage collector to delete the pods Jun 22 21:12:34.406: INFO: Deleting ReplicationController externalsvc took: 6.308593ms Jun 22 21:12:34.706: INFO: Terminating ReplicationController externalsvc pods took: 300.258104ms Jun 22 21:12:49.547: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:12:49.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5098" for this suite. 
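Editor's note: the type flip performed above can be sketched with plain kubectl. The service and namespace names echo the log, but the patch is one way to do this by hand, not what the framework runs; note that spec.clusterIP must be cleared when retyping to ExternalName (a JSON merge patch with null deletes the field):

kubectl patch service clusterip-service -n services-5098 --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-5098.svc.cluster.local","clusterIP":null}}'
# Resolution from a pod should then return a CNAME, as in the logged nslookup output
kubectl exec -n services-5098 execpoddw74k -- nslookup clusterip-service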
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.475 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":13,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:12:49.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:12:49.972: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:12:51.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:12:55.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io 
discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:12:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7428" for this suite. STEP: Destroying namespace "webhook-7428-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.819 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":14,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:12:55.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0622 21:13:35.975421 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
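On the admission-webhook case that just finished: the discovery-document checks can be repeated by hand with kubectl's raw API access. The paths below are the real discovery endpoints named in the steps above; jq is assumed only for readability.

kubectl get --raw /apis | jq '.groups[] | select(.name=="admissionregistration.k8s.io")'
kubectl get --raw /apis/admissionregistration.k8s.io
# The group/version document should list both webhook resource kinds:
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq '.resources[].name'
# expect: mutatingwebhookconfigurations, validatingwebhookconfigurations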
Jun 22 21:13:35.975: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:13:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9657" for this suite. • [SLOW TEST:40.568 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":15,"skipped":377,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:13:35.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:13:36.114: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4441 I0622 21:13:36.130096 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4441, replica count: 1 I0622 21:13:37.180478 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:13:38.180688 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:13:39.180898 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:13:40.181149 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 21:13:40.309: INFO: Created: latency-svc-mlhrn Jun 22 21:13:40.323: INFO: Got 
endpoints: latency-svc-mlhrn [41.686404ms] Jun 22 21:13:40.374: INFO: Created: latency-svc-9t26x Jun 22 21:13:40.387: INFO: Got endpoints: latency-svc-9t26x [64.680682ms] Jun 22 21:13:40.407: INFO: Created: latency-svc-qwcrp Jun 22 21:13:40.424: INFO: Got endpoints: latency-svc-qwcrp [100.820727ms] Jun 22 21:13:40.443: INFO: Created: latency-svc-84q5z Jun 22 21:13:40.504: INFO: Got endpoints: latency-svc-84q5z [181.172742ms] Jun 22 21:13:40.519: INFO: Created: latency-svc-qxtwz Jun 22 21:13:40.533: INFO: Got endpoints: latency-svc-qxtwz [210.202339ms] Jun 22 21:13:40.579: INFO: Created: latency-svc-pwdlf Jun 22 21:13:40.592: INFO: Got endpoints: latency-svc-pwdlf [269.33631ms] Jun 22 21:13:40.642: INFO: Created: latency-svc-4z9jr Jun 22 21:13:40.645: INFO: Got endpoints: latency-svc-4z9jr [322.533817ms] Jun 22 21:13:40.679: INFO: Created: latency-svc-f55m5 Jun 22 21:13:40.689: INFO: Got endpoints: latency-svc-f55m5 [366.004674ms] Jun 22 21:13:40.711: INFO: Created: latency-svc-hqvln Jun 22 21:13:40.804: INFO: Got endpoints: latency-svc-hqvln [480.681419ms] Jun 22 21:13:40.833: INFO: Created: latency-svc-qxjkh Jun 22 21:13:40.857: INFO: Got endpoints: latency-svc-qxjkh [534.459352ms] Jun 22 21:13:40.875: INFO: Created: latency-svc-xs4td Jun 22 21:13:40.888: INFO: Got endpoints: latency-svc-xs4td [564.611297ms] Jun 22 21:13:40.942: INFO: Created: latency-svc-z2m4c Jun 22 21:13:40.945: INFO: Got endpoints: latency-svc-z2m4c [621.976976ms] Jun 22 21:13:41.037: INFO: Created: latency-svc-h4bsq Jun 22 21:13:41.079: INFO: Got endpoints: latency-svc-h4bsq [756.126816ms] Jun 22 21:13:41.097: INFO: Created: latency-svc-bsdlg Jun 22 21:13:41.109: INFO: Got endpoints: latency-svc-bsdlg [786.167043ms] Jun 22 21:13:41.143: INFO: Created: latency-svc-np6t6 Jun 22 21:13:41.158: INFO: Got endpoints: latency-svc-np6t6 [834.368162ms] Jun 22 21:13:41.223: INFO: Created: latency-svc-7hwhs Jun 22 21:13:41.227: INFO: Got endpoints: latency-svc-7hwhs [903.809041ms] Jun 22 21:13:41.256: INFO: Created: latency-svc-swdhb Jun 22 21:13:41.278: INFO: Got endpoints: latency-svc-swdhb [890.612196ms] Jun 22 21:13:41.312: INFO: Created: latency-svc-grddm Jun 22 21:13:41.350: INFO: Got endpoints: latency-svc-grddm [926.752626ms] Jun 22 21:13:41.373: INFO: Created: latency-svc-brv7f Jun 22 21:13:41.390: INFO: Got endpoints: latency-svc-brv7f [885.584797ms] Jun 22 21:13:41.430: INFO: Created: latency-svc-nr8jx Jun 22 21:13:41.492: INFO: Got endpoints: latency-svc-nr8jx [959.212926ms] Jun 22 21:13:41.511: INFO: Created: latency-svc-c5zlj Jun 22 21:13:41.531: INFO: Got endpoints: latency-svc-c5zlj [939.130202ms] Jun 22 21:13:41.587: INFO: Created: latency-svc-vkhdk Jun 22 21:13:41.618: INFO: Got endpoints: latency-svc-vkhdk [972.579201ms] Jun 22 21:13:41.641: INFO: Created: latency-svc-q5x6j Jun 22 21:13:41.676: INFO: Got endpoints: latency-svc-q5x6j [987.130399ms] Jun 22 21:13:41.751: INFO: Created: latency-svc-9t5jx Jun 22 21:13:41.753: INFO: Got endpoints: latency-svc-9t5jx [949.229219ms] Jun 22 21:13:41.805: INFO: Created: latency-svc-km7pv Jun 22 21:13:41.814: INFO: Got endpoints: latency-svc-km7pv [956.394783ms] Jun 22 21:13:41.832: INFO: Created: latency-svc-pknq4 Jun 22 21:13:41.845: INFO: Got endpoints: latency-svc-pknq4 [956.91075ms] Jun 22 21:13:41.904: INFO: Created: latency-svc-r5mtb Jun 22 21:13:41.918: INFO: Got endpoints: latency-svc-r5mtb [973.215359ms] Jun 22 21:13:41.943: INFO: Created: latency-svc-7lb7d Jun 22 21:13:41.955: INFO: Got endpoints: latency-svc-7lb7d [875.197225ms] Jun 22 21:13:41.978: INFO: 
Created: latency-svc-tc8sq Jun 22 21:13:42.013: INFO: Got endpoints: latency-svc-tc8sq [904.256601ms] Jun 22 21:13:42.037: INFO: Created: latency-svc-smgh9 Jun 22 21:13:42.051: INFO: Got endpoints: latency-svc-smgh9 [893.564194ms] Jun 22 21:13:42.072: INFO: Created: latency-svc-58kgx Jun 22 21:13:42.081: INFO: Got endpoints: latency-svc-58kgx [854.245003ms] Jun 22 21:13:42.151: INFO: Created: latency-svc-8xnck Jun 22 21:13:42.154: INFO: Got endpoints: latency-svc-8xnck [875.912311ms] Jun 22 21:13:42.301: INFO: Created: latency-svc-psfvb Jun 22 21:13:42.315: INFO: Got endpoints: latency-svc-psfvb [964.897523ms] Jun 22 21:13:42.333: INFO: Created: latency-svc-sfwg9 Jun 22 21:13:42.346: INFO: Got endpoints: latency-svc-sfwg9 [956.157343ms] Jun 22 21:13:42.375: INFO: Created: latency-svc-5kn55 Jun 22 21:13:42.388: INFO: Got endpoints: latency-svc-5kn55 [895.504901ms] Jun 22 21:13:42.439: INFO: Created: latency-svc-ftnq7 Jun 22 21:13:42.443: INFO: Got endpoints: latency-svc-ftnq7 [911.30427ms] Jun 22 21:13:42.468: INFO: Created: latency-svc-47mv5 Jun 22 21:13:42.479: INFO: Got endpoints: latency-svc-47mv5 [860.400976ms] Jun 22 21:13:42.504: INFO: Created: latency-svc-7jttf Jun 22 21:13:42.515: INFO: Got endpoints: latency-svc-7jttf [838.912599ms] Jun 22 21:13:42.577: INFO: Created: latency-svc-w6wkv Jun 22 21:13:42.581: INFO: Got endpoints: latency-svc-w6wkv [827.956453ms] Jun 22 21:13:42.625: INFO: Created: latency-svc-8xcqf Jun 22 21:13:42.648: INFO: Got endpoints: latency-svc-8xcqf [834.038947ms] Jun 22 21:13:42.666: INFO: Created: latency-svc-5fndv Jun 22 21:13:42.702: INFO: Got endpoints: latency-svc-5fndv [857.440307ms] Jun 22 21:13:42.733: INFO: Created: latency-svc-88zng Jun 22 21:13:42.744: INFO: Got endpoints: latency-svc-88zng [825.825726ms] Jun 22 21:13:42.795: INFO: Created: latency-svc-zgmjq Jun 22 21:13:42.834: INFO: Got endpoints: latency-svc-zgmjq [879.125264ms] Jun 22 21:13:42.852: INFO: Created: latency-svc-9lcgg Jun 22 21:13:42.869: INFO: Got endpoints: latency-svc-9lcgg [855.895681ms] Jun 22 21:13:42.889: INFO: Created: latency-svc-xk66r Jun 22 21:13:42.907: INFO: Got endpoints: latency-svc-xk66r [855.558242ms] Jun 22 21:13:42.924: INFO: Created: latency-svc-bp485 Jun 22 21:13:43.013: INFO: Got endpoints: latency-svc-bp485 [932.11653ms] Jun 22 21:13:43.057: INFO: Created: latency-svc-lff8h Jun 22 21:13:43.080: INFO: Got endpoints: latency-svc-lff8h [926.068304ms] Jun 22 21:13:43.151: INFO: Created: latency-svc-rtdvs Jun 22 21:13:43.158: INFO: Got endpoints: latency-svc-rtdvs [842.831792ms] Jun 22 21:13:43.179: INFO: Created: latency-svc-l2jfb Jun 22 21:13:43.195: INFO: Got endpoints: latency-svc-l2jfb [848.638985ms] Jun 22 21:13:43.221: INFO: Created: latency-svc-nk4nq Jun 22 21:13:43.230: INFO: Got endpoints: latency-svc-nk4nq [842.616846ms] Jun 22 21:13:43.285: INFO: Created: latency-svc-4cr6z Jun 22 21:13:43.304: INFO: Got endpoints: latency-svc-4cr6z [861.064726ms] Jun 22 21:13:43.332: INFO: Created: latency-svc-mgx2t Jun 22 21:13:43.345: INFO: Got endpoints: latency-svc-mgx2t [866.410811ms] Jun 22 21:13:43.365: INFO: Created: latency-svc-sn9xq Jun 22 21:13:43.382: INFO: Got endpoints: latency-svc-sn9xq [866.964619ms] Jun 22 21:13:43.454: INFO: Created: latency-svc-mhxp5 Jun 22 21:13:43.501: INFO: Got endpoints: latency-svc-mhxp5 [919.869267ms] Jun 22 21:13:43.501: INFO: Created: latency-svc-dzhm6 Jun 22 21:13:43.515: INFO: Got endpoints: latency-svc-dzhm6 [867.258425ms] Jun 22 21:13:43.542: INFO: Created: latency-svc-h9twx Jun 22 21:13:43.582: INFO: Got endpoints: 
latency-svc-h9twx [880.068535ms] Jun 22 21:13:43.623: INFO: Created: latency-svc-gvdnx Jun 22 21:13:43.635: INFO: Got endpoints: latency-svc-gvdnx [890.650748ms] Jun 22 21:13:43.676: INFO: Created: latency-svc-9sp5w Jun 22 21:13:43.714: INFO: Got endpoints: latency-svc-9sp5w [879.996222ms] Jun 22 21:13:43.740: INFO: Created: latency-svc-jskrm Jun 22 21:13:43.757: INFO: Got endpoints: latency-svc-jskrm [887.761026ms] Jun 22 21:13:43.788: INFO: Created: latency-svc-vmh6g Jun 22 21:13:43.811: INFO: Got endpoints: latency-svc-vmh6g [904.457356ms] Jun 22 21:13:43.851: INFO: Created: latency-svc-795sb Jun 22 21:13:43.866: INFO: Got endpoints: latency-svc-795sb [852.857045ms] Jun 22 21:13:43.917: INFO: Created: latency-svc-qxlxl Jun 22 21:13:43.932: INFO: Got endpoints: latency-svc-qxlxl [851.388765ms] Jun 22 21:13:43.978: INFO: Created: latency-svc-h7kn7 Jun 22 21:13:43.980: INFO: Got endpoints: latency-svc-h7kn7 [821.736629ms] Jun 22 21:13:44.028: INFO: Created: latency-svc-s6q2l Jun 22 21:13:44.057: INFO: Got endpoints: latency-svc-s6q2l [862.70681ms] Jun 22 21:13:44.115: INFO: Created: latency-svc-cjctz Jun 22 21:13:44.126: INFO: Got endpoints: latency-svc-cjctz [895.751395ms] Jun 22 21:13:44.184: INFO: Created: latency-svc-t5wnq Jun 22 21:13:44.197: INFO: Got endpoints: latency-svc-t5wnq [892.647774ms] Jun 22 21:13:44.271: INFO: Created: latency-svc-d2qqc Jun 22 21:13:44.319: INFO: Created: latency-svc-l29wq Jun 22 21:13:44.319: INFO: Got endpoints: latency-svc-d2qqc [973.679476ms] Jun 22 21:13:44.335: INFO: Got endpoints: latency-svc-l29wq [952.952747ms] Jun 22 21:13:44.370: INFO: Created: latency-svc-t8r4q Jun 22 21:13:44.439: INFO: Got endpoints: latency-svc-t8r4q [938.011503ms] Jun 22 21:13:44.474: INFO: Created: latency-svc-72n4p Jun 22 21:13:44.492: INFO: Got endpoints: latency-svc-72n4p [976.250557ms] Jun 22 21:13:44.517: INFO: Created: latency-svc-bnk4l Jun 22 21:13:44.528: INFO: Got endpoints: latency-svc-bnk4l [945.630709ms] Jun 22 21:13:44.592: INFO: Created: latency-svc-5ncvz Jun 22 21:13:44.607: INFO: Got endpoints: latency-svc-5ncvz [971.842657ms] Jun 22 21:13:44.654: INFO: Created: latency-svc-zd2qr Jun 22 21:13:44.666: INFO: Got endpoints: latency-svc-zd2qr [952.704809ms] Jun 22 21:13:44.726: INFO: Created: latency-svc-5wt9v Jun 22 21:13:44.760: INFO: Got endpoints: latency-svc-5wt9v [1.002422842s] Jun 22 21:13:44.850: INFO: Created: latency-svc-5vxdb Jun 22 21:13:44.856: INFO: Got endpoints: latency-svc-5vxdb [1.044417538s] Jun 22 21:13:44.895: INFO: Created: latency-svc-m8p5p Jun 22 21:13:44.908: INFO: Got endpoints: latency-svc-m8p5p [1.04141796s] Jun 22 21:13:44.933: INFO: Created: latency-svc-lncjr Jun 22 21:13:44.942: INFO: Got endpoints: latency-svc-lncjr [1.010365522s] Jun 22 21:13:44.983: INFO: Created: latency-svc-z2lsj Jun 22 21:13:44.987: INFO: Got endpoints: latency-svc-z2lsj [1.006582008s] Jun 22 21:13:45.018: INFO: Created: latency-svc-8tw5n Jun 22 21:13:45.033: INFO: Got endpoints: latency-svc-8tw5n [975.062632ms] Jun 22 21:13:45.075: INFO: Created: latency-svc-gkmc7 Jun 22 21:13:45.117: INFO: Got endpoints: latency-svc-gkmc7 [991.008145ms] Jun 22 21:13:45.141: INFO: Created: latency-svc-wfkd5 Jun 22 21:13:45.153: INFO: Got endpoints: latency-svc-wfkd5 [956.714443ms] Jun 22 21:13:45.180: INFO: Created: latency-svc-w9glh Jun 22 21:13:45.196: INFO: Got endpoints: latency-svc-w9glh [876.725616ms] Jun 22 21:13:45.265: INFO: Created: latency-svc-8ptpt Jun 22 21:13:45.270: INFO: Got endpoints: latency-svc-8ptpt [935.22552ms] Jun 22 21:13:45.309: INFO: Created: 
latency-svc-7wtqz Jun 22 21:13:45.322: INFO: Got endpoints: latency-svc-7wtqz [883.107827ms] Jun 22 21:13:45.344: INFO: Created: latency-svc-ls48g Jun 22 21:13:45.352: INFO: Got endpoints: latency-svc-ls48g [860.819461ms] Jun 22 21:13:45.402: INFO: Created: latency-svc-szfsp Jun 22 21:13:45.407: INFO: Got endpoints: latency-svc-szfsp [878.618734ms] Jun 22 21:13:45.444: INFO: Created: latency-svc-kzm94 Jun 22 21:13:45.461: INFO: Got endpoints: latency-svc-kzm94 [854.269334ms] Jun 22 21:13:45.500: INFO: Created: latency-svc-pw24d Jun 22 21:13:45.555: INFO: Got endpoints: latency-svc-pw24d [887.924808ms] Jun 22 21:13:45.594: INFO: Created: latency-svc-jjvj6 Jun 22 21:13:45.611: INFO: Got endpoints: latency-svc-jjvj6 [851.621169ms] Jun 22 21:13:45.638: INFO: Created: latency-svc-spgvz Jun 22 21:13:45.678: INFO: Got endpoints: latency-svc-spgvz [821.886634ms] Jun 22 21:13:45.695: INFO: Created: latency-svc-v5xxt Jun 22 21:13:45.710: INFO: Got endpoints: latency-svc-v5xxt [802.655254ms] Jun 22 21:13:45.755: INFO: Created: latency-svc-x6pqp Jun 22 21:13:45.803: INFO: Got endpoints: latency-svc-x6pqp [861.210797ms] Jun 22 21:13:45.828: INFO: Created: latency-svc-8gjvv Jun 22 21:13:45.840: INFO: Got endpoints: latency-svc-8gjvv [853.466492ms] Jun 22 21:13:45.896: INFO: Created: latency-svc-djb26 Jun 22 21:13:45.923: INFO: Got endpoints: latency-svc-djb26 [890.701324ms] Jun 22 21:13:45.950: INFO: Created: latency-svc-m8fkj Jun 22 21:13:45.978: INFO: Got endpoints: latency-svc-m8fkj [860.520999ms] Jun 22 21:13:46.023: INFO: Created: latency-svc-jpqjp Jun 22 21:13:46.063: INFO: Got endpoints: latency-svc-jpqjp [909.742345ms] Jun 22 21:13:46.078: INFO: Created: latency-svc-bngdn Jun 22 21:13:46.126: INFO: Got endpoints: latency-svc-bngdn [930.255483ms] Jun 22 21:13:46.619: INFO: Created: latency-svc-zfrjz Jun 22 21:13:46.626: INFO: Got endpoints: latency-svc-zfrjz [1.355648938s] Jun 22 21:13:46.924: INFO: Created: latency-svc-hknwg Jun 22 21:13:46.943: INFO: Got endpoints: latency-svc-hknwg [1.620746572s] Jun 22 21:13:47.063: INFO: Created: latency-svc-9qwhl Jun 22 21:13:47.072: INFO: Got endpoints: latency-svc-9qwhl [1.71993776s] Jun 22 21:13:47.290: INFO: Created: latency-svc-dv5ck Jun 22 21:13:47.307: INFO: Got endpoints: latency-svc-dv5ck [1.900043887s] Jun 22 21:13:47.481: INFO: Created: latency-svc-6x9h8 Jun 22 21:13:47.536: INFO: Got endpoints: latency-svc-6x9h8 [2.074410685s] Jun 22 21:13:47.672: INFO: Created: latency-svc-hxzsr Jun 22 21:13:47.677: INFO: Got endpoints: latency-svc-hxzsr [2.122937754s] Jun 22 21:13:47.764: INFO: Created: latency-svc-jhs5h Jun 22 21:13:47.923: INFO: Got endpoints: latency-svc-jhs5h [2.311948023s] Jun 22 21:13:47.957: INFO: Created: latency-svc-n2w4c Jun 22 21:13:47.973: INFO: Got endpoints: latency-svc-n2w4c [2.295737768s] Jun 22 21:13:48.080: INFO: Created: latency-svc-qx5w8 Jun 22 21:13:48.087: INFO: Got endpoints: latency-svc-qx5w8 [2.376342633s] Jun 22 21:13:48.287: INFO: Created: latency-svc-qmjrb Jun 22 21:13:48.297: INFO: Got endpoints: latency-svc-qmjrb [2.494057517s] Jun 22 21:13:48.332: INFO: Created: latency-svc-rkx2c Jun 22 21:13:48.339: INFO: Got endpoints: latency-svc-rkx2c [2.49902571s] Jun 22 21:13:48.706: INFO: Created: latency-svc-s2r9f Jun 22 21:13:48.828: INFO: Created: latency-svc-jtn6f Jun 22 21:13:48.828: INFO: Got endpoints: latency-svc-s2r9f [2.904754149s] Jun 22 21:13:48.867: INFO: Got endpoints: latency-svc-jtn6f [2.889108756s] Jun 22 21:13:49.087: INFO: Created: latency-svc-h2zfg Jun 22 21:13:49.090: INFO: Got endpoints: 
latency-svc-h2zfg [3.027168997s] Jun 22 21:13:49.316: INFO: Created: latency-svc-wjvhb Jun 22 21:13:49.346: INFO: Got endpoints: latency-svc-wjvhb [3.22006961s] Jun 22 21:13:49.409: INFO: Created: latency-svc-7lc62 Jun 22 21:13:49.438: INFO: Got endpoints: latency-svc-7lc62 [2.812340644s] Jun 22 21:13:49.711: INFO: Created: latency-svc-68gxs Jun 22 21:13:49.870: INFO: Got endpoints: latency-svc-68gxs [2.926741935s] Jun 22 21:13:49.942: INFO: Created: latency-svc-dbjpz Jun 22 21:13:50.121: INFO: Got endpoints: latency-svc-dbjpz [3.048961302s] Jun 22 21:13:50.338: INFO: Created: latency-svc-zw5vz Jun 22 21:13:50.343: INFO: Got endpoints: latency-svc-zw5vz [3.036506863s] Jun 22 21:13:50.397: INFO: Created: latency-svc-z5h9j Jun 22 21:13:50.409: INFO: Got endpoints: latency-svc-z5h9j [2.873727428s] Jun 22 21:13:50.429: INFO: Created: latency-svc-w2ckz Jun 22 21:13:50.523: INFO: Got endpoints: latency-svc-w2ckz [2.845203472s] Jun 22 21:13:50.797: INFO: Created: latency-svc-955xc Jun 22 21:13:50.804: INFO: Got endpoints: latency-svc-955xc [2.881084694s] Jun 22 21:13:50.936: INFO: Created: latency-svc-jmw9p Jun 22 21:13:50.957: INFO: Got endpoints: latency-svc-jmw9p [2.983885606s] Jun 22 21:13:51.032: INFO: Created: latency-svc-bffnj Jun 22 21:13:51.104: INFO: Got endpoints: latency-svc-bffnj [3.01713288s] Jun 22 21:13:51.340: INFO: Created: latency-svc-zpwql Jun 22 21:13:51.409: INFO: Got endpoints: latency-svc-zpwql [3.11154718s] Jun 22 21:13:51.571: INFO: Created: latency-svc-8mkpt Jun 22 21:13:51.575: INFO: Got endpoints: latency-svc-8mkpt [3.235766093s] Jun 22 21:13:51.652: INFO: Created: latency-svc-stl6n Jun 22 21:13:51.775: INFO: Got endpoints: latency-svc-stl6n [2.946865794s] Jun 22 21:13:51.813: INFO: Created: latency-svc-lzdkk Jun 22 21:13:51.830: INFO: Got endpoints: latency-svc-lzdkk [2.962570707s] Jun 22 21:13:51.864: INFO: Created: latency-svc-h5nhj Jun 22 21:13:51.929: INFO: Got endpoints: latency-svc-h5nhj [2.839093081s] Jun 22 21:13:51.931: INFO: Created: latency-svc-sprpb Jun 22 21:13:51.938: INFO: Got endpoints: latency-svc-sprpb [2.592012234s] Jun 22 21:13:51.969: INFO: Created: latency-svc-kgv4m Jun 22 21:13:51.992: INFO: Got endpoints: latency-svc-kgv4m [2.553925171s] Jun 22 21:13:52.073: INFO: Created: latency-svc-5d79f Jun 22 21:13:52.076: INFO: Got endpoints: latency-svc-5d79f [2.206052582s] Jun 22 21:13:52.101: INFO: Created: latency-svc-h9g8c Jun 22 21:13:52.119: INFO: Got endpoints: latency-svc-h9g8c [1.997091392s] Jun 22 21:13:52.137: INFO: Created: latency-svc-2cccq Jun 22 21:13:52.149: INFO: Got endpoints: latency-svc-2cccq [1.805395144s] Jun 22 21:13:52.217: INFO: Created: latency-svc-zjbg9 Jun 22 21:13:52.220: INFO: Got endpoints: latency-svc-zjbg9 [1.810550539s] Jun 22 21:13:52.243: INFO: Created: latency-svc-xfkdv Jun 22 21:13:52.258: INFO: Got endpoints: latency-svc-xfkdv [1.735218926s] Jun 22 21:13:52.281: INFO: Created: latency-svc-4gbzt Jun 22 21:13:52.307: INFO: Got endpoints: latency-svc-4gbzt [1.502812781s] Jun 22 21:13:52.392: INFO: Created: latency-svc-b7kmf Jun 22 21:13:52.394: INFO: Got endpoints: latency-svc-b7kmf [1.436395168s] Jun 22 21:13:52.422: INFO: Created: latency-svc-xs6d6 Jun 22 21:13:52.439: INFO: Got endpoints: latency-svc-xs6d6 [1.334878924s] Jun 22 21:13:52.455: INFO: Created: latency-svc-r2kdr Jun 22 21:13:52.468: INFO: Got endpoints: latency-svc-r2kdr [1.05911969s] Jun 22 21:13:52.491: INFO: Created: latency-svc-b9lz8 Jun 22 21:13:52.533: INFO: Got endpoints: latency-svc-b9lz8 [957.740217ms] Jun 22 21:13:52.533: INFO: Created: 
latency-svc-qp8lg Jun 22 21:13:52.541: INFO: Got endpoints: latency-svc-qp8lg [766.167958ms] Jun 22 21:13:52.579: INFO: Created: latency-svc-gs7xj Jun 22 21:13:52.620: INFO: Got endpoints: latency-svc-gs7xj [790.724579ms] Jun 22 21:13:52.677: INFO: Created: latency-svc-q7724 Jun 22 21:13:52.694: INFO: Got endpoints: latency-svc-q7724 [764.632632ms] Jun 22 21:13:52.719: INFO: Created: latency-svc-c7wbb Jun 22 21:13:52.736: INFO: Got endpoints: latency-svc-c7wbb [797.897171ms] Jun 22 21:13:52.765: INFO: Created: latency-svc-d8xkq Jun 22 21:13:52.828: INFO: Got endpoints: latency-svc-d8xkq [835.293139ms] Jun 22 21:13:52.830: INFO: Created: latency-svc-hrc9z Jun 22 21:13:52.851: INFO: Got endpoints: latency-svc-hrc9z [774.757598ms] Jun 22 21:13:52.875: INFO: Created: latency-svc-r9zfp Jun 22 21:13:52.887: INFO: Got endpoints: latency-svc-r9zfp [768.259025ms] Jun 22 21:13:52.911: INFO: Created: latency-svc-tdsfs Jun 22 21:13:52.960: INFO: Got endpoints: latency-svc-tdsfs [810.76493ms] Jun 22 21:13:52.974: INFO: Created: latency-svc-9lp5s Jun 22 21:13:52.990: INFO: Got endpoints: latency-svc-9lp5s [769.771942ms] Jun 22 21:13:53.023: INFO: Created: latency-svc-wnk9n Jun 22 21:13:53.056: INFO: Got endpoints: latency-svc-wnk9n [797.560928ms] Jun 22 21:13:53.127: INFO: Created: latency-svc-fxscn Jun 22 21:13:53.140: INFO: Got endpoints: latency-svc-fxscn [832.848239ms] Jun 22 21:13:53.167: INFO: Created: latency-svc-fwg4f Jun 22 21:13:53.182: INFO: Got endpoints: latency-svc-fwg4f [788.600825ms] Jun 22 21:13:53.235: INFO: Created: latency-svc-8jslz Jun 22 21:13:53.247: INFO: Got endpoints: latency-svc-8jslz [808.128148ms] Jun 22 21:13:53.275: INFO: Created: latency-svc-ctgtl Jun 22 21:13:53.291: INFO: Got endpoints: latency-svc-ctgtl [823.025677ms] Jun 22 21:13:53.316: INFO: Created: latency-svc-bg8dx Jun 22 21:13:53.327: INFO: Got endpoints: latency-svc-bg8dx [794.121711ms] Jun 22 21:13:53.372: INFO: Created: latency-svc-v6q6w Jun 22 21:13:53.375: INFO: Got endpoints: latency-svc-v6q6w [833.674509ms] Jun 22 21:13:53.403: INFO: Created: latency-svc-5bqqt Jun 22 21:13:53.418: INFO: Got endpoints: latency-svc-5bqqt [797.426994ms] Jun 22 21:13:53.450: INFO: Created: latency-svc-8d6sb Jun 22 21:13:53.466: INFO: Got endpoints: latency-svc-8d6sb [772.203382ms] Jun 22 21:13:53.516: INFO: Created: latency-svc-kc96k Jun 22 21:13:53.557: INFO: Got endpoints: latency-svc-kc96k [820.852339ms] Jun 22 21:13:53.593: INFO: Created: latency-svc-2m78k Jun 22 21:13:53.610: INFO: Got endpoints: latency-svc-2m78k [782.412607ms] Jun 22 21:13:53.662: INFO: Created: latency-svc-5h6s4 Jun 22 21:13:53.665: INFO: Got endpoints: latency-svc-5h6s4 [814.32369ms] Jun 22 21:13:53.692: INFO: Created: latency-svc-h6n98 Jun 22 21:13:53.705: INFO: Got endpoints: latency-svc-h6n98 [818.384233ms] Jun 22 21:13:53.734: INFO: Created: latency-svc-422fh Jun 22 21:13:53.748: INFO: Got endpoints: latency-svc-422fh [788.620541ms] Jun 22 21:13:53.804: INFO: Created: latency-svc-xrbz4 Jun 22 21:13:53.828: INFO: Got endpoints: latency-svc-xrbz4 [838.051915ms] Jun 22 21:13:53.884: INFO: Created: latency-svc-6fg6t Jun 22 21:13:53.931: INFO: Got endpoints: latency-svc-6fg6t [875.129359ms] Jun 22 21:13:53.956: INFO: Created: latency-svc-9fv8m Jun 22 21:13:53.971: INFO: Got endpoints: latency-svc-9fv8m [830.332346ms] Jun 22 21:13:53.988: INFO: Created: latency-svc-6wp5k Jun 22 21:13:54.001: INFO: Got endpoints: latency-svc-6wp5k [818.774734ms] Jun 22 21:13:54.087: INFO: Created: latency-svc-phf6c Jun 22 21:13:54.090: INFO: Got endpoints: 
latency-svc-phf6c [842.352976ms] Jun 22 21:13:54.154: INFO: Created: latency-svc-b8k55 Jun 22 21:13:54.175: INFO: Got endpoints: latency-svc-b8k55 [883.376887ms] Jun 22 21:13:54.247: INFO: Created: latency-svc-bn667 Jun 22 21:13:54.251: INFO: Got endpoints: latency-svc-bn667 [924.268188ms] Jun 22 21:13:54.283: INFO: Created: latency-svc-x7wwh Jun 22 21:13:54.296: INFO: Got endpoints: latency-svc-x7wwh [920.649091ms] Jun 22 21:13:54.315: INFO: Created: latency-svc-7m9tc Jun 22 21:13:54.332: INFO: Got endpoints: latency-svc-7m9tc [914.050961ms] Jun 22 21:13:54.379: INFO: Created: latency-svc-snxb9 Jun 22 21:13:54.381: INFO: Got endpoints: latency-svc-snxb9 [914.967907ms] Jun 22 21:13:54.408: INFO: Created: latency-svc-zplkf Jun 22 21:13:54.422: INFO: Got endpoints: latency-svc-zplkf [865.454392ms] Jun 22 21:13:54.451: INFO: Created: latency-svc-fv54g Jun 22 21:13:54.523: INFO: Got endpoints: latency-svc-fv54g [912.356537ms] Jun 22 21:13:54.537: INFO: Created: latency-svc-5p56d Jun 22 21:13:54.549: INFO: Got endpoints: latency-svc-5p56d [884.093367ms] Jun 22 21:13:54.579: INFO: Created: latency-svc-6fgpq Jun 22 21:13:54.615: INFO: Got endpoints: latency-svc-6fgpq [909.846459ms] Jun 22 21:13:54.668: INFO: Created: latency-svc-t2lqn Jun 22 21:13:54.671: INFO: Got endpoints: latency-svc-t2lqn [922.947565ms] Jun 22 21:13:54.717: INFO: Created: latency-svc-hp4df Jun 22 21:13:54.742: INFO: Got endpoints: latency-svc-hp4df [913.972679ms] Jun 22 21:13:54.765: INFO: Created: latency-svc-z6n7d Jun 22 21:13:54.810: INFO: Got endpoints: latency-svc-z6n7d [879.119295ms] Jun 22 21:13:54.826: INFO: Created: latency-svc-qs6jl Jun 22 21:13:54.858: INFO: Got endpoints: latency-svc-qs6jl [887.575794ms] Jun 22 21:13:54.896: INFO: Created: latency-svc-rnjlt Jun 22 21:13:54.908: INFO: Got endpoints: latency-svc-rnjlt [906.57846ms] Jun 22 21:13:54.969: INFO: Created: latency-svc-z7d7m Jun 22 21:13:54.986: INFO: Got endpoints: latency-svc-z7d7m [896.019062ms] Jun 22 21:13:55.005: INFO: Created: latency-svc-kx5tn Jun 22 21:13:55.021: INFO: Got endpoints: latency-svc-kx5tn [846.762568ms] Jun 22 21:13:55.039: INFO: Created: latency-svc-vs99f Jun 22 21:13:55.073: INFO: Got endpoints: latency-svc-vs99f [821.645159ms] Jun 22 21:13:55.105: INFO: Created: latency-svc-h8klv Jun 22 21:13:55.131: INFO: Got endpoints: latency-svc-h8klv [835.11685ms] Jun 22 21:13:55.173: INFO: Created: latency-svc-wck5h Jun 22 21:13:55.233: INFO: Got endpoints: latency-svc-wck5h [901.108884ms] Jun 22 21:13:55.235: INFO: Created: latency-svc-xwqmf Jun 22 21:13:55.250: INFO: Got endpoints: latency-svc-xwqmf [868.921334ms] Jun 22 21:13:55.273: INFO: Created: latency-svc-xrlgj Jun 22 21:13:55.286: INFO: Got endpoints: latency-svc-xrlgj [863.998368ms] Jun 22 21:13:55.309: INFO: Created: latency-svc-648km Jun 22 21:13:55.391: INFO: Got endpoints: latency-svc-648km [867.953385ms] Jun 22 21:13:55.392: INFO: Created: latency-svc-5lwfs Jun 22 21:13:55.413: INFO: Got endpoints: latency-svc-5lwfs [864.118407ms] Jun 22 21:13:55.453: INFO: Created: latency-svc-j4szb Jun 22 21:13:55.546: INFO: Got endpoints: latency-svc-j4szb [930.747585ms] Jun 22 21:13:55.557: INFO: Created: latency-svc-drrf2 Jun 22 21:13:55.576: INFO: Got endpoints: latency-svc-drrf2 [904.339236ms] Jun 22 21:13:55.603: INFO: Created: latency-svc-9sqf8 Jun 22 21:13:55.618: INFO: Got endpoints: latency-svc-9sqf8 [876.339381ms] Jun 22 21:13:55.639: INFO: Created: latency-svc-fhb62 Jun 22 21:13:55.672: INFO: Got endpoints: latency-svc-fhb62 [861.761289ms] Jun 22 21:13:55.689: INFO: Created: 
latency-svc-jrttl Jun 22 21:13:55.702: INFO: Got endpoints: latency-svc-jrttl [844.056045ms] Jun 22 21:13:55.750: INFO: Created: latency-svc-68rgv Jun 22 21:13:55.769: INFO: Got endpoints: latency-svc-68rgv [861.153354ms] Jun 22 21:13:55.840: INFO: Created: latency-svc-rvqg6 Jun 22 21:13:55.847: INFO: Got endpoints: latency-svc-rvqg6 [860.962799ms] Jun 22 21:13:55.873: INFO: Created: latency-svc-pttvm Jun 22 21:13:55.883: INFO: Got endpoints: latency-svc-pttvm [861.856683ms] Jun 22 21:13:55.905: INFO: Created: latency-svc-jnhp6 Jun 22 21:13:55.919: INFO: Got endpoints: latency-svc-jnhp6 [846.42686ms] Jun 22 21:13:55.978: INFO: Created: latency-svc-cf9gq Jun 22 21:13:56.017: INFO: Got endpoints: latency-svc-cf9gq [885.905486ms] Jun 22 21:13:56.017: INFO: Created: latency-svc-cwshn Jun 22 21:13:56.032: INFO: Got endpoints: latency-svc-cwshn [799.197198ms] Jun 22 21:13:56.134: INFO: Created: latency-svc-778gs Jun 22 21:13:56.159: INFO: Got endpoints: latency-svc-778gs [908.10203ms] Jun 22 21:13:56.159: INFO: Latencies: [64.680682ms 100.820727ms 181.172742ms 210.202339ms 269.33631ms 322.533817ms 366.004674ms 480.681419ms 534.459352ms 564.611297ms 621.976976ms 756.126816ms 764.632632ms 766.167958ms 768.259025ms 769.771942ms 772.203382ms 774.757598ms 782.412607ms 786.167043ms 788.600825ms 788.620541ms 790.724579ms 794.121711ms 797.426994ms 797.560928ms 797.897171ms 799.197198ms 802.655254ms 808.128148ms 810.76493ms 814.32369ms 818.384233ms 818.774734ms 820.852339ms 821.645159ms 821.736629ms 821.886634ms 823.025677ms 825.825726ms 827.956453ms 830.332346ms 832.848239ms 833.674509ms 834.038947ms 834.368162ms 835.11685ms 835.293139ms 838.051915ms 838.912599ms 842.352976ms 842.616846ms 842.831792ms 844.056045ms 846.42686ms 846.762568ms 848.638985ms 851.388765ms 851.621169ms 852.857045ms 853.466492ms 854.245003ms 854.269334ms 855.558242ms 855.895681ms 857.440307ms 860.400976ms 860.520999ms 860.819461ms 860.962799ms 861.064726ms 861.153354ms 861.210797ms 861.761289ms 861.856683ms 862.70681ms 863.998368ms 864.118407ms 865.454392ms 866.410811ms 866.964619ms 867.258425ms 867.953385ms 868.921334ms 875.129359ms 875.197225ms 875.912311ms 876.339381ms 876.725616ms 878.618734ms 879.119295ms 879.125264ms 879.996222ms 880.068535ms 883.107827ms 883.376887ms 884.093367ms 885.584797ms 885.905486ms 887.575794ms 887.761026ms 887.924808ms 890.612196ms 890.650748ms 890.701324ms 892.647774ms 893.564194ms 895.504901ms 895.751395ms 896.019062ms 901.108884ms 903.809041ms 904.256601ms 904.339236ms 904.457356ms 906.57846ms 908.10203ms 909.742345ms 909.846459ms 911.30427ms 912.356537ms 913.972679ms 914.050961ms 914.967907ms 919.869267ms 920.649091ms 922.947565ms 924.268188ms 926.068304ms 926.752626ms 930.255483ms 930.747585ms 932.11653ms 935.22552ms 938.011503ms 939.130202ms 945.630709ms 949.229219ms 952.704809ms 952.952747ms 956.157343ms 956.394783ms 956.714443ms 956.91075ms 957.740217ms 959.212926ms 964.897523ms 971.842657ms 972.579201ms 973.215359ms 973.679476ms 975.062632ms 976.250557ms 987.130399ms 991.008145ms 1.002422842s 1.006582008s 1.010365522s 1.04141796s 1.044417538s 1.05911969s 1.334878924s 1.355648938s 1.436395168s 1.502812781s 1.620746572s 1.71993776s 1.735218926s 1.805395144s 1.810550539s 1.900043887s 1.997091392s 2.074410685s 2.122937754s 2.206052582s 2.295737768s 2.311948023s 2.376342633s 2.494057517s 2.49902571s 2.553925171s 2.592012234s 2.812340644s 2.839093081s 2.845203472s 2.873727428s 2.881084694s 2.889108756s 2.904754149s 2.926741935s 2.946865794s 2.962570707s 2.983885606s 3.01713288s 3.027168997s 
3.036506863s 3.048961302s 3.11154718s 3.22006961s 3.235766093s] Jun 22 21:13:56.159: INFO: 50 %ile: 887.761026ms Jun 22 21:13:56.159: INFO: 90 %ile: 2.553925171s Jun 22 21:13:56.159: INFO: 99 %ile: 3.22006961s Jun 22 21:13:56.159: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:13:56.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4441" for this suite. • [SLOW TEST:20.198 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":16,"skipped":378,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:13:56.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:14:02.341: INFO: DNS probes using dns-test-866cd013-29a5-42e1-90bd-50e2189c0de3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:14:10.586: INFO: File wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:10.602: INFO: File jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. 
' instead of 'bar.example.com.' Jun 22 21:14:10.602: INFO: Lookups using dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 failed for: [wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local] Jun 22 21:14:15.637: INFO: File wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:15.649: INFO: File jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:15.649: INFO: Lookups using dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 failed for: [wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local] Jun 22 21:14:20.606: INFO: File wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:20.632: INFO: File jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:20.632: INFO: Lookups using dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 failed for: [wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local] Jun 22 21:14:25.643: INFO: File wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains '' instead of 'bar.example.com.' Jun 22 21:14:25.647: INFO: File jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 21:14:25.647: INFO: Lookups using dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 failed for: [wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local] Jun 22 21:14:30.607: INFO: File wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local from pod dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 contains 'foo.example.com. ' instead of 'bar.example.com.' 
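The retries above are expected behavior, not flakiness: after the externalName field changes, previously cached answers age out at their DNS TTL, so the probes poll until the CNAME flips. A sketch of the same change and check by hand, with names copied from this run; the patch form is an assumption, since the suite updates the Service through the Go client.

kubectl patch service dns-test-service-3 -n dns-8423 --type=merge \
  -p '{"spec":{"externalName":"bar.example.com"}}'
# Run from a pod in the cluster, as the probe images do:
dig +short dns-test-service-3.dns-8423.svc.cluster.local CNAME
# expected once propagated: bar.example.com.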
Jun 22 21:14:30.611: INFO: Lookups using dns-8423/dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 failed for: [wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local] Jun 22 21:14:35.611: INFO: DNS probes using dns-test-4a2bf1cf-feca-4cc8-9d32-d2ee41937171 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8423.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8423.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:14:44.412: INFO: DNS probes using dns-test-46ff2964-d873-480a-b71c-c8f61898a55d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:14:44.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8423" for this suite. • [SLOW TEST:48.365 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":17,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:14:44.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:14:45.050: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:14:49.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5638" for this suite. 
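The websocket test above drives the same streaming subresource that kubectl exec speaks; a sketch of the equivalent call, with the pod name taken from the log below and a hypothetical command.

# Underlying API path, carried over an upgraded streaming connection:
#   /api/v1/namespaces/pods-5638/pods/<pod>/exec?command=echo&command=remote&stdout=true&stderr=true
kubectl exec -n pods-5638 pod-exec-websocket-593cf7eb-370f-4104-ba53-677204d24269 -- echo remote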
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:14:49.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 22 21:14:49.347: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 21:14:49.367: INFO: Waiting for terminating namespaces to be deleted... Jun 22 21:14:49.369: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 22 21:14:49.383: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.383: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:14:49.383: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.383: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:14:49.383: INFO: pod-exec-websocket-593cf7eb-370f-4104-ba53-677204d24269 from pods-5638 started at 2020-06-22 21:14:45 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.383: INFO: Container main ready: true, restart count 0 Jun 22 21:14:49.383: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 22 21:14:49.411: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.411: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:14:49.411: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.411: INFO: Container kube-bench ready: false, restart count 0 Jun 22 21:14:49.411: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.411: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:14:49.411: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 22 21:14:49.411: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0f5b5a84-5baf-49c8-b41b-4b77f6cf34ab 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0f5b5a84-5baf-49c8-b41b-4b77f6cf34ab off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0f5b5a84-5baf-49c8-b41b-4b77f6cf34ab [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:15:05.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8634" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.391 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":19,"skipped":491,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:15:05.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f2398b87-3e09-4f0a-8b3d-e96a0bc73663 STEP: Creating a pod to test consume secrets Jun 22 21:15:05.801: INFO: Waiting up to 5m0s for pod "pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48" in namespace "secrets-40" to be "success or failure" Jun 22 21:15:05.836: INFO: Pod "pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48": Phase="Pending", Reason="", readiness=false. Elapsed: 34.699065ms Jun 22 21:15:07.840: INFO: Pod "pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039104621s Jun 22 21:15:09.844: INFO: Pod "pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043116149s STEP: Saw pod success Jun 22 21:15:09.845: INFO: Pod "pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48" satisfied condition "success or failure" Jun 22 21:15:09.848: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48 container secret-volume-test: STEP: delete the pod Jun 22 21:15:09.882: INFO: Waiting for pod pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48 to disappear Jun 22 21:15:09.891: INFO: Pod pod-secrets-0bf87a64-3459-4268-83e6-1d5cf27aee48 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:15:09.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-40" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:15:09.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:16:10.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4230" for this suite. 
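The 60-second soak above is the whole assertion: a pod whose readiness probe always fails must stay NotReady without ever restarting, since readiness — unlike liveness — never kills the container. A minimal manifest that reproduces it; the pod name, image, and command are assumptions.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails -> pod never becomes Ready
      periodSeconds: 5
EOF
# Expect READY 0/1 and RESTARTS 0 for as long as you watch:
kubectl get pod never-ready -w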
• [SLOW TEST:60.165 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":518,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:16:10.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:16:10.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6256" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":22,"skipped":522,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:16:10.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 22 21:16:14.856: INFO: Successfully updated pod "labelsupdate86f86f62-9572-4907-8bbc-e833f02c3c0b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:16:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8845" for this suite. 
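The label update in the projected downwardAPI case above propagates into the mounted volume without restarting the pod. A sketch of the mechanism under assumed names (pod, mount path, and label are hypothetical; the projected/downwardAPI volume layout follows the standard API shape).

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: canary
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labels-demo stage=stable --overwrite
# Within the kubelet sync period, the mounted file reflects the new value:
kubectl exec labels-demo -- cat /etc/podinfo/labels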
• [SLOW TEST:8.654 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":525,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:16:18.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8650
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jun 22 21:16:18.969: INFO: Found 0 stateful pods, waiting for 3
Jun 22 21:16:28.973: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:16:28.973: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:16:28.973: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 22 21:16:38.974: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:16:38.974: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:16:38.974: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jun 22 21:16:39.002: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 22 21:16:49.040: INFO: Updating stateful set ss2
Jun 22 21:16:49.052: INFO: Waiting for Pod statefulset-8650/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jun 22 21:16:59.234: INFO: Found 2 stateful pods, waiting for 3
Jun 22 21:17:09.239: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:17:09.239: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 21:17:09.239: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 22 21:17:09.259: INFO: Updating stateful set ss2
Jun 22 21:17:09.292: INFO: Waiting for Pod statefulset-8650/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 22 21:17:19.318: INFO: Updating stateful set ss2
Jun 22 21:17:19.333: INFO: Waiting for StatefulSet statefulset-8650/ss2 to complete update
Jun 22 21:17:19.333: INFO: Waiting for Pod statefulset-8650/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jun 22 21:17:29.340: INFO: Waiting for StatefulSet statefulset-8650/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jun 22 21:17:39.340: INFO: Deleting all statefulset in ns statefulset-8650
Jun 22 21:17:39.342: INFO: Scaling statefulset ss2 to 0
Jun 22 21:17:59.361: INFO: Waiting for statefulset status.replicas updated to 0
Jun 22 21:17:59.364: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:17:59.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8650" for this suite.
• [SLOW TEST:100.500 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":24,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:17:59.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 22 21:17:59.472: INFO: Waiting up to 5m0s for pod "pod-05d2dec3-a04a-405e-be85-d99adb8a827d" in namespace "emptydir-2394" to be "success or failure"
Jun 22 21:17:59.515: INFO: Pod "pod-05d2dec3-a04a-405e-be85-d99adb8a827d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.619251ms
Jun 22 21:18:01.520: INFO: Pod "pod-05d2dec3-a04a-405e-be85-d99adb8a827d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047677256s
Jun 22 21:18:03.524: INFO: Pod "pod-05d2dec3-a04a-405e-be85-d99adb8a827d": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.051765664s STEP: Saw pod success Jun 22 21:18:03.524: INFO: Pod "pod-05d2dec3-a04a-405e-be85-d99adb8a827d" satisfied condition "success or failure" Jun 22 21:18:03.528: INFO: Trying to get logs from node jerma-worker2 pod pod-05d2dec3-a04a-405e-be85-d99adb8a827d container test-container: STEP: delete the pod Jun 22 21:18:03.583: INFO: Waiting for pod pod-05d2dec3-a04a-405e-be85-d99adb8a827d to disappear Jun 22 21:18:03.602: INFO: Pod pod-05d2dec3-a04a-405e-be85-d99adb8a827d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:18:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2394" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":571,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:18:03.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:18:03.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9173' Jun 22 21:18:03.835: INFO: stderr: "" Jun 22 21:18:03.835: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 22 21:18:08.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9173 -o json' Jun 22 21:18:08.988: INFO: stderr: "" Jun 22 21:18:08.988: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-22T21:18:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9173\",\n \"resourceVersion\": \"26477748\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9173/pods/e2e-test-httpd-pod\",\n \"uid\": \"ba51269f-f8ef-4c3d-a537-d21be722c051\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": 
\"default-token-tj72b\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tj72b\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-tj72b\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T21:18:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T21:18:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T21:18:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T21:18:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://269b9b7e81a287faafa770a9c4d2c5cabd1710eebe9786e4c773bbc0d41535e1\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-22T21:18:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.223\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.223\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-22T21:18:03Z\"\n }\n}\n" STEP: replace the image in the pod Jun 22 21:18:08.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9173' Jun 22 21:18:09.298: INFO: stderr: "" Jun 22 21:18:09.298: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jun 22 21:18:09.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9173' Jun 22 21:18:19.531: INFO: stderr: "" Jun 22 21:18:19.531: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:18:19.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9173" for this suite. 
• [SLOW TEST:15.928 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":26,"skipped":576,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:18:19.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jun 22 21:18:19.584: INFO: namespace kubectl-3917
Jun 22 21:18:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3917'
Jun 22 21:18:19.836: INFO: stderr: ""
Jun 22 21:18:19.836: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jun 22 21:18:20.840: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:20.840: INFO: Found 0 / 1
Jun 22 21:18:21.842: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:21.842: INFO: Found 0 / 1
Jun 22 21:18:22.840: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:22.840: INFO: Found 0 / 1
Jun 22 21:18:23.841: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:23.841: INFO: Found 1 / 1
Jun 22 21:18:23.841: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 22 21:18:23.845: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:23.845: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 22 21:18:23.845: INFO: wait on agnhost-master startup in kubectl-3917
Jun 22 21:18:23.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-49m5g agnhost-master --namespace=kubectl-3917'
Jun 22 21:18:23.976: INFO: stderr: ""
Jun 22 21:18:23.976: INFO: stdout: "Paused\n"
STEP: exposing RC
Jun 22 21:18:23.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3917'
Jun 22 21:18:24.140: INFO: stderr: ""
Jun 22 21:18:24.140: INFO: stdout: "service/rm2 exposed\n"
Jun 22 21:18:24.145: INFO: Service rm2 in namespace kubectl-3917 found.
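rm2 maps service port 1234 to container port 6379 on the pods matched by the RC's selector; the step that follows chains a second expose off rm2 itself. The CLI flow, condensed (a sketch; the namespace flags are elided):

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
# Exposing a service copies its selector, so rm3 fronts the same pods
# while remapping the port again (2345 -> 6379):
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both list the agnhost pod IP on port 6379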
STEP: exposing service
Jun 22 21:18:26.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3917'
Jun 22 21:18:26.319: INFO: stderr: ""
Jun 22 21:18:26.319: INFO: stdout: "service/rm3 exposed\n"
Jun 22 21:18:26.328: INFO: Service rm3 in namespace kubectl-3917 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:18:28.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3917" for this suite.
• [SLOW TEST:8.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":27,"skipped":578,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:18:28.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 22 21:18:28.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 22 21:18:30.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728457508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 22 21:18:33.950: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:18:34.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4927" for this suite.
STEP: Destroying namespace "webhook-4927-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.155 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":28,"skipped":599,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:18:34.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 22 21:18:34.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-467'
Jun 22 21:18:35.159: INFO: stderr: ""
Jun 22 21:18:35.159: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jun 22 21:18:35.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-467'
Jun 22 21:18:35.791: INFO: stderr: ""
Jun 22 21:18:35.791: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jun 22 21:18:36.798: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:36.798: INFO: Found 0 / 1
Jun 22 21:18:37.821: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:37.821: INFO: Found 0 / 1
Jun 22 21:18:38.815: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:38.815: INFO: Found 1 / 1
Jun 22 21:18:38.815: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 22 21:18:38.821: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 22 21:18:38.821: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
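For the admission-webhook spec a few entries up: "Registering the mutating pod webhook via the AdmissionRegistration API" amounts to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service. A rough YAML equivalent (the webhook name, handler path, and rules here are assumptions; the suite registers it programmatically and fills in the generated CA bundle from the "Setting up server cert" step):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pod-demo                  # hypothetical name
webhooks:
- name: mutate-pod.example.com           # hypothetical name
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-4927
      path: /mutating-pods               # assumed handler path
    # caBundle: <PEM bundle for the serving cert, omitted here>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
# Any pod created afterwards is patched by the webhook before admission,
# which is what "create a pod that should be updated by the webhook" verifies.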
Jun 22 21:18:38.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-gnkjk --namespace=kubectl-467'
Jun 22 21:18:38.974: INFO: stderr: ""
Jun 22 21:18:38.974: INFO: stdout: "Name: agnhost-master-gnkjk\nNamespace: kubectl-467\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Mon, 22 Jun 2020 21:18:35 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.225\nIPs:\n IP: 10.244.2.225\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9e5f1661dfb75805833395cabc89e527f6a11b77b76bbb5139b0f4ddb6ce5bd0\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 22 Jun 2020 21:18:37 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tnlkk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tnlkk:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tnlkk\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-467/agnhost-master-gnkjk to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n"
Jun 22 21:18:38.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-467'
Jun 22 21:18:39.099: INFO: stderr: ""
Jun 22 21:18:39.099: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-467\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-gnkjk\n"
Jun 22 21:18:39.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-467'
Jun 22 21:18:39.229: INFO: stderr: ""
Jun 22 21:18:39.229: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-467\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.73.163\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.225:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 22 21:18:39.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Jun 22 21:18:39.440: INFO: stderr: ""
Jun 22 21:18:39.440: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Mon, 22 Jun 2020 21:18:38 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 22 Jun 2020 21:16:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 22 Jun 2020 21:16:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 22 Jun 2020 21:16:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 22 Jun 2020 21:16:23 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 99d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 99d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 99d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 99d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 99d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 99d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 99d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 99d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 99d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n"
Jun 22 21:18:39.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-467'
Jun 22 21:18:39.594: INFO: stderr: ""
Jun 22 21:18:39.594: INFO: stdout: "Name: kubectl-467\nLabels: e2e-framework=kubectl\n e2e-run=6426321e-d338-493d-9440-c43ed0d034a7\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:18:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-467" for this suite. • [SLOW TEST:5.075 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":29,"skipped":601,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:18:39.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7236 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jun 22 21:18:39.726: INFO: Found 0 stateful pods, waiting for 3 Jun 22 21:18:49.756: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:18:49.756: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:18:49.756: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 22 21:18:59.731: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:18:59.731: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:18:59.731: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:18:59.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7236 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:19:00.031: INFO: stderr: "I0622 21:18:59.881992 383 log.go:172] (0xc000b6d290) (0xc000a68780) Create stream\nI0622 21:18:59.882051 383 log.go:172] (0xc000b6d290) (0xc000a68780) Stream added, broadcasting: 1\nI0622 21:18:59.886656 383 log.go:172] (0xc000b6d290) Reply frame received for 1\nI0622 21:18:59.886698 383 log.go:172] (0xc000b6d290) (0xc0003c9400) Create stream\nI0622 
21:18:59.886708 383 log.go:172] (0xc000b6d290) (0xc0003c9400) Stream added, broadcasting: 3\nI0622 21:18:59.887761 383 log.go:172] (0xc000b6d290) Reply frame received for 3\nI0622 21:18:59.887793 383 log.go:172] (0xc000b6d290) (0xc000737a40) Create stream\nI0622 21:18:59.887804 383 log.go:172] (0xc000b6d290) (0xc000737a40) Stream added, broadcasting: 5\nI0622 21:18:59.888770 383 log.go:172] (0xc000b6d290) Reply frame received for 5\nI0622 21:18:59.980164 383 log.go:172] (0xc000b6d290) Data frame received for 5\nI0622 21:18:59.980188 383 log.go:172] (0xc000737a40) (5) Data frame handling\nI0622 21:18:59.980204 383 log.go:172] (0xc000737a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:19:00.022472 383 log.go:172] (0xc000b6d290) Data frame received for 3\nI0622 21:19:00.022507 383 log.go:172] (0xc0003c9400) (3) Data frame handling\nI0622 21:19:00.022533 383 log.go:172] (0xc0003c9400) (3) Data frame sent\nI0622 21:19:00.022695 383 log.go:172] (0xc000b6d290) Data frame received for 5\nI0622 21:19:00.022728 383 log.go:172] (0xc000737a40) (5) Data frame handling\nI0622 21:19:00.022776 383 log.go:172] (0xc000b6d290) Data frame received for 3\nI0622 21:19:00.022813 383 log.go:172] (0xc0003c9400) (3) Data frame handling\nI0622 21:19:00.024412 383 log.go:172] (0xc000b6d290) Data frame received for 1\nI0622 21:19:00.024446 383 log.go:172] (0xc000a68780) (1) Data frame handling\nI0622 21:19:00.024466 383 log.go:172] (0xc000a68780) (1) Data frame sent\nI0622 21:19:00.024481 383 log.go:172] (0xc000b6d290) (0xc000a68780) Stream removed, broadcasting: 1\nI0622 21:19:00.024522 383 log.go:172] (0xc000b6d290) Go away received\nI0622 21:19:00.024861 383 log.go:172] (0xc000b6d290) (0xc000a68780) Stream removed, broadcasting: 1\nI0622 21:19:00.024879 383 log.go:172] (0xc000b6d290) (0xc0003c9400) Stream removed, broadcasting: 3\nI0622 21:19:00.024890 383 log.go:172] (0xc000b6d290) (0xc000737a40) Stream removed, broadcasting: 5\n" Jun 22 21:19:00.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:19:00.031: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 22 21:19:10.065: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 22 21:19:20.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7236 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:19:20.304: INFO: stderr: "I0622 21:19:20.213886 405 log.go:172] (0xc000b34dc0) (0xc0009b8280) Create stream\nI0622 21:19:20.213942 405 log.go:172] (0xc000b34dc0) (0xc0009b8280) Stream added, broadcasting: 1\nI0622 21:19:20.218266 405 log.go:172] (0xc000b34dc0) Reply frame received for 1\nI0622 21:19:20.218347 405 log.go:172] (0xc000b34dc0) (0xc00060c640) Create stream\nI0622 21:19:20.218365 405 log.go:172] (0xc000b34dc0) (0xc00060c640) Stream added, broadcasting: 3\nI0622 21:19:20.219305 405 log.go:172] (0xc000b34dc0) Reply frame received for 3\nI0622 21:19:20.219348 405 log.go:172] (0xc000b34dc0) (0xc00075f400) Create stream\nI0622 21:19:20.219369 405 log.go:172] (0xc000b34dc0) (0xc00075f400) Stream added, broadcasting: 5\nI0622 21:19:20.220161 405 log.go:172] (0xc000b34dc0) Reply frame received for 5\nI0622 
21:19:20.295939 405 log.go:172] (0xc000b34dc0) Data frame received for 3\nI0622 21:19:20.295993 405 log.go:172] (0xc00060c640) (3) Data frame handling\nI0622 21:19:20.296012 405 log.go:172] (0xc00060c640) (3) Data frame sent\nI0622 21:19:20.296027 405 log.go:172] (0xc000b34dc0) Data frame received for 3\nI0622 21:19:20.296039 405 log.go:172] (0xc00060c640) (3) Data frame handling\nI0622 21:19:20.296081 405 log.go:172] (0xc000b34dc0) Data frame received for 5\nI0622 21:19:20.296114 405 log.go:172] (0xc00075f400) (5) Data frame handling\nI0622 21:19:20.296133 405 log.go:172] (0xc00075f400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 21:19:20.296146 405 log.go:172] (0xc000b34dc0) Data frame received for 5\nI0622 21:19:20.296182 405 log.go:172] (0xc00075f400) (5) Data frame handling\nI0622 21:19:20.298053 405 log.go:172] (0xc000b34dc0) Data frame received for 1\nI0622 21:19:20.298073 405 log.go:172] (0xc0009b8280) (1) Data frame handling\nI0622 21:19:20.298084 405 log.go:172] (0xc0009b8280) (1) Data frame sent\nI0622 21:19:20.298100 405 log.go:172] (0xc000b34dc0) (0xc0009b8280) Stream removed, broadcasting: 1\nI0622 21:19:20.298164 405 log.go:172] (0xc000b34dc0) Go away received\nI0622 21:19:20.298469 405 log.go:172] (0xc000b34dc0) (0xc0009b8280) Stream removed, broadcasting: 1\nI0622 21:19:20.298491 405 log.go:172] (0xc000b34dc0) (0xc00060c640) Stream removed, broadcasting: 3\nI0622 21:19:20.298501 405 log.go:172] (0xc000b34dc0) (0xc00075f400) Stream removed, broadcasting: 5\n" Jun 22 21:19:20.304: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:19:20.304: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:19:50.325: INFO: Waiting for StatefulSet statefulset-7236/ss2 to complete update STEP: Rolling back to a previous revision Jun 22 21:20:00.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7236 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:20:00.598: INFO: stderr: "I0622 21:20:00.468937 425 log.go:172] (0xc00010b600) (0xc000667a40) Create stream\nI0622 21:20:00.468988 425 log.go:172] (0xc00010b600) (0xc000667a40) Stream added, broadcasting: 1\nI0622 21:20:00.471465 425 log.go:172] (0xc00010b600) Reply frame received for 1\nI0622 21:20:00.471514 425 log.go:172] (0xc00010b600) (0xc000b86000) Create stream\nI0622 21:20:00.471528 425 log.go:172] (0xc00010b600) (0xc000b86000) Stream added, broadcasting: 3\nI0622 21:20:00.472478 425 log.go:172] (0xc00010b600) Reply frame received for 3\nI0622 21:20:00.472523 425 log.go:172] (0xc00010b600) (0xc000aa2000) Create stream\nI0622 21:20:00.472537 425 log.go:172] (0xc00010b600) (0xc000aa2000) Stream added, broadcasting: 5\nI0622 21:20:00.473590 425 log.go:172] (0xc00010b600) Reply frame received for 5\nI0622 21:20:00.562472 425 log.go:172] (0xc00010b600) Data frame received for 5\nI0622 21:20:00.562504 425 log.go:172] (0xc000aa2000) (5) Data frame handling\nI0622 21:20:00.562524 425 log.go:172] (0xc000aa2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:20:00.589993 425 log.go:172] (0xc00010b600) Data frame received for 3\nI0622 21:20:00.590027 425 log.go:172] (0xc000b86000) (3) Data frame handling\nI0622 21:20:00.590049 425 log.go:172] (0xc000b86000) (3) Data frame sent\nI0622 21:20:00.590063 425 log.go:172] (0xc00010b600) Data frame received for 
3\nI0622 21:20:00.590086 425 log.go:172] (0xc000b86000) (3) Data frame handling\nI0622 21:20:00.590114 425 log.go:172] (0xc00010b600) Data frame received for 5\nI0622 21:20:00.590129 425 log.go:172] (0xc000aa2000) (5) Data frame handling\nI0622 21:20:00.591869 425 log.go:172] (0xc00010b600) Data frame received for 1\nI0622 21:20:00.591894 425 log.go:172] (0xc000667a40) (1) Data frame handling\nI0622 21:20:00.591911 425 log.go:172] (0xc000667a40) (1) Data frame sent\nI0622 21:20:00.591927 425 log.go:172] (0xc00010b600) (0xc000667a40) Stream removed, broadcasting: 1\nI0622 21:20:00.591950 425 log.go:172] (0xc00010b600) Go away received\nI0622 21:20:00.592320 425 log.go:172] (0xc00010b600) (0xc000667a40) Stream removed, broadcasting: 1\nI0622 21:20:00.592344 425 log.go:172] (0xc00010b600) (0xc000b86000) Stream removed, broadcasting: 3\nI0622 21:20:00.592355 425 log.go:172] (0xc00010b600) (0xc000aa2000) Stream removed, broadcasting: 5\n" Jun 22 21:20:00.598: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:20:00.598: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 21:20:10.629: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 22 21:20:20.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7236 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:20:20.923: INFO: stderr: "I0622 21:20:20.817523 446 log.go:172] (0xc0006666e0) (0xc000656000) Create stream\nI0622 21:20:20.817587 446 log.go:172] (0xc0006666e0) (0xc000656000) Stream added, broadcasting: 1\nI0622 21:20:20.820164 446 log.go:172] (0xc0006666e0) Reply frame received for 1\nI0622 21:20:20.820210 446 log.go:172] (0xc0006666e0) (0xc0006cda40) Create stream\nI0622 21:20:20.820226 446 log.go:172] (0xc0006666e0) (0xc0006cda40) Stream added, broadcasting: 3\nI0622 21:20:20.821884 446 log.go:172] (0xc0006666e0) Reply frame received for 3\nI0622 21:20:20.821920 446 log.go:172] (0xc0006666e0) (0xc0006cdc20) Create stream\nI0622 21:20:20.821935 446 log.go:172] (0xc0006666e0) (0xc0006cdc20) Stream added, broadcasting: 5\nI0622 21:20:20.822962 446 log.go:172] (0xc0006666e0) Reply frame received for 5\nI0622 21:20:20.916104 446 log.go:172] (0xc0006666e0) Data frame received for 3\nI0622 21:20:20.916156 446 log.go:172] (0xc0006cda40) (3) Data frame handling\nI0622 21:20:20.916203 446 log.go:172] (0xc0006cda40) (3) Data frame sent\nI0622 21:20:20.916232 446 log.go:172] (0xc0006666e0) Data frame received for 5\nI0622 21:20:20.916240 446 log.go:172] (0xc0006cdc20) (5) Data frame handling\nI0622 21:20:20.916247 446 log.go:172] (0xc0006cdc20) (5) Data frame sent\nI0622 21:20:20.916254 446 log.go:172] (0xc0006666e0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 21:20:20.916274 446 log.go:172] (0xc0006666e0) Data frame received for 3\nI0622 21:20:20.916326 446 log.go:172] (0xc0006cda40) (3) Data frame handling\nI0622 21:20:20.916365 446 log.go:172] (0xc0006cdc20) (5) Data frame handling\nI0622 21:20:20.917911 446 log.go:172] (0xc0006666e0) Data frame received for 1\nI0622 21:20:20.917926 446 log.go:172] (0xc000656000) (1) Data frame handling\nI0622 21:20:20.917939 446 log.go:172] (0xc000656000) (1) Data frame sent\nI0622 21:20:20.917983 446 log.go:172] (0xc0006666e0) (0xc000656000) Stream removed, broadcasting: 1\nI0622 21:20:20.918018 446 log.go:172] 
(0xc0006666e0) Go away received\nI0622 21:20:20.918443 446 log.go:172] (0xc0006666e0) (0xc000656000) Stream removed, broadcasting: 1\nI0622 21:20:20.918467 446 log.go:172] (0xc0006666e0) (0xc0006cda40) Stream removed, broadcasting: 3\nI0622 21:20:20.918478 446 log.go:172] (0xc0006666e0) (0xc0006cdc20) Stream removed, broadcasting: 5\n" Jun 22 21:20:20.924: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:20:20.924: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:20:30.941: INFO: Waiting for StatefulSet statefulset-7236/ss2 to complete update Jun 22 21:20:30.941: INFO: Waiting for Pod statefulset-7236/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 22 21:20:30.941: INFO: Waiting for Pod statefulset-7236/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 22 21:20:40.949: INFO: Waiting for StatefulSet statefulset-7236/ss2 to complete update Jun 22 21:20:40.949: INFO: Waiting for Pod statefulset-7236/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 22 21:20:50.948: INFO: Deleting all statefulset in ns statefulset-7236 Jun 22 21:20:50.950: INFO: Scaling statefulset ss2 to 0 Jun 22 21:21:10.982: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:21:10.985: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:10.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7236" for this suite. 
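The update/rollback cycle above, driven here through the e2e framework, matches the ordinary StatefulSet rollout flow. A condensed sketch with kubectl (the container name "webserver" is an assumption; only the image values come from the log):

kubectl -n statefulset-7236 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine      # creates a new revision
kubectl -n statefulset-7236 rollout status statefulset/ss2
kubectl -n statefulset-7236 get controllerrevisions    # one revision per template
kubectl -n statefulset-7236 rollout undo statefulset/ss2   # roll back to 2.4.38
# With the default RollingUpdate strategy the controller replaces pods in
# reverse ordinal order (ss2-2, ss2-1, ss2-0), which is what the waits above show.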
• [SLOW TEST:151.403 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":30,"skipped":607,"failed":0}
S
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 22 21:21:11.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-2929/configmap-test-5c510486-7740-44d2-a928-017542e4e728
STEP: Creating a pod to test consume configMaps
Jun 22 21:21:11.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d" in namespace "configmap-2929" to be "success or failure"
Jun 22 21:21:11.069: INFO: Pod "pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240127ms
Jun 22 21:21:13.072: INFO: Pod "pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007145026s
Jun 22 21:21:15.076: INFO: Pod "pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011095791s
STEP: Saw pod success
Jun 22 21:21:15.076: INFO: Pod "pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d" satisfied condition "success or failure"
Jun 22 21:21:15.080: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d container env-test:
STEP: delete the pod
Jun 22 21:21:15.143: INFO: Waiting for pod pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d to disappear
Jun 22 21:21:15.158: INFO: Pod pod-configmaps-125253f3-5dc0-437d-be0a-5c46ce3d514d no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 22 21:21:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2929" for this suite.
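The "consume configMaps" pod above injects the ConfigMap through the container environment. A minimal sketch of the same pattern, importing the whole map with envFrom (names and key are placeholders; a sibling spec further down wires a single key instead):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    envFrom:
    - configMapRef:
        name: configmap-test   # every key becomes an env var
EOF
kubectl logs env-test-demo | grep data-1   # data-1=value-1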
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:15.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jun 22 21:21:15.234: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix410155573/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:15.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4471" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":32,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:15.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 22 21:21:15.382: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6887 /api/v1/namespaces/watch-6887/configmaps/e2e-watch-test-watch-closed 7e27b062-7e7f-4c71-a8e0-21c935406271 26478815 0 2020-06-22 21:21:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 21:21:15.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6887 /api/v1/namespaces/watch-6887/configmaps/e2e-watch-test-watch-closed 7e27b062-7e7f-4c71-a8e0-21c935406271 26478817 0 2020-06-22 21:21:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying 
the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 22 21:21:15.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6887 /api/v1/namespaces/watch-6887/configmaps/e2e-watch-test-watch-closed 7e27b062-7e7f-4c71-a8e0-21c935406271 26478818 0 2020-06-22 21:21:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 21:21:15.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6887 /api/v1/namespaces/watch-6887/configmaps/e2e-watch-test-watch-closed 7e27b062-7e7f-4c71-a8e0-21c935406271 26478819 0 2020-06-22 21:21:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:15.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6887" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":33,"skipped":674,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:15.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jun 22 21:21:15.681: INFO: Waiting up to 5m0s for pod "client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286" in namespace "containers-2917" to be "success or failure" Jun 22 21:21:15.694: INFO: Pod "client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286": Phase="Pending", Reason="", readiness=false. Elapsed: 12.308287ms Jun 22 21:21:17.703: INFO: Pod "client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021675531s Jun 22 21:21:19.709: INFO: Pod "client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027581719s STEP: Saw pod success Jun 22 21:21:19.709: INFO: Pod "client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286" satisfied condition "success or failure" Jun 22 21:21:19.712: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286 container test-container: STEP: delete the pod Jun 22 21:21:19.765: INFO: Waiting for pod client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286 to disappear Jun 22 21:21:19.772: INFO: Pod client-containers-b5f77cf9-e209-4cd9-8021-477b216cc286 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:19.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2917" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":676,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:19.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5125/configmap-test-56419a09-a8be-48dd-b6da-ae191bf4fb05 STEP: Creating a pod to test consume configMaps Jun 22 21:21:19.841: INFO: Waiting up to 5m0s for pod "pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598" in namespace "configmap-5125" to be "success or failure" Jun 22 21:21:19.843: INFO: Pod "pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871221ms Jun 22 21:21:21.878: INFO: Pod "pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036984103s Jun 22 21:21:23.882: INFO: Pod "pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041237152s STEP: Saw pod success Jun 22 21:21:23.882: INFO: Pod "pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598" satisfied condition "success or failure" Jun 22 21:21:23.885: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598 container env-test: STEP: delete the pod Jun 22 21:21:23.948: INFO: Waiting for pod pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598 to disappear Jun 22 21:21:23.962: INFO: Pod pod-configmaps-136ed044-9f0a-4230-a478-ada9181e6598 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:23.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5125" for this suite. 
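[Note] The spec above wires a ConfigMap key into a container environment variable via valueFrom/configMapKeyRef. A minimal hand-run sketch, assuming a reachable cluster and kubectl on PATH; the names (demo-cm, env-demo, data-1) are illustrative, not the suite's generated ones:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-cm
          key: data-1
EOF
kubectl logs env-demo    # expect "DATA_1=value-1" once the pod reports Succeeded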
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":684,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:23.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:21:24.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15" in namespace "projected-1687" to be "success or failure" Jun 22 21:21:24.058: INFO: Pod "downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05019ms Jun 22 21:21:26.062: INFO: Pod "downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008347592s Jun 22 21:21:28.081: INFO: Pod "downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02756065s STEP: Saw pod success Jun 22 21:21:28.081: INFO: Pod "downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15" satisfied condition "success or failure" Jun 22 21:21:28.084: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15 container client-container: STEP: delete the pod Jun 22 21:21:28.100: INFO: Waiting for pod downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15 to disappear Jun 22 21:21:28.105: INFO: Pod downwardapi-volume-1dbe5b0e-2f39-4568-96da-671a44225d15 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:28.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1687" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":689,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:28.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 22 21:21:28.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6272 /api/v1/namespaces/watch-6272/configmaps/e2e-watch-test-resource-version 1984a8c2-b781-4e48-91b7-87a015c1efb1 26479006 0 2020-06-22 21:21:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 21:21:28.235: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6272 /api/v1/namespaces/watch-6272/configmaps/e2e-watch-test-resource-version 1984a8c2-b781-4e48-91b7-87a015c1efb1 26479007 0 2020-06-22 21:21:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:28.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6272" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":37,"skipped":691,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:28.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-372d52c7-acf9-4105-a749-d8e8c819c6d3 STEP: Creating a pod to test consume configMaps Jun 22 21:21:28.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7" in namespace "projected-5000" to be "success or failure" Jun 22 21:21:28.321: INFO: Pod "pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.698326ms Jun 22 21:21:30.326: INFO: Pod "pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008071351s Jun 22 21:21:32.331: INFO: Pod "pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01315564s STEP: Saw pod success Jun 22 21:21:32.331: INFO: Pod "pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7" satisfied condition "success or failure" Jun 22 21:21:32.334: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7 container projected-configmap-volume-test: STEP: delete the pod Jun 22 21:21:32.371: INFO: Waiting for pod pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7 to disappear Jun 22 21:21:32.383: INFO: Pod pod-projected-configmaps-9cbaf220-f7e6-4a0d-8118-69a7c7dca8a7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5000" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":697,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:32.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:36.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1161" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":703,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:36.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-73afb10a-93d5-4fac-8c68-2238ac5a0983 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-73afb10a-93d5-4fac-8c68-2238ac5a0983 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:21:42.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-51" for this suite. 
• [SLOW TEST:6.187 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:21:42.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7195, will wait for the garbage collector to delete the pods Jun 22 21:21:46.836: INFO: Deleting Job.batch foo took: 6.88967ms Jun 22 21:21:47.136: INFO: Terminating Job.batch foo pods took: 300.346908ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:22:29.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7195" for this suite. • [SLOW TEST:46.980 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":41,"skipped":744,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:22:29.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 21:22:29.734: INFO: Waiting up to 5m0s for pod "pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802" in namespace "emptydir-7015" to be "success or failure" Jun 22 21:22:29.742: INFO: Pod "pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097477ms Jun 22 21:22:31.752: INFO: Pod "pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018613797s Jun 22 21:22:33.758: INFO: Pod "pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02377465s STEP: Saw pod success Jun 22 21:22:33.758: INFO: Pod "pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802" satisfied condition "success or failure" Jun 22 21:22:33.762: INFO: Trying to get logs from node jerma-worker pod pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802 container test-container: STEP: delete the pod Jun 22 21:22:33.856: INFO: Waiting for pod pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802 to disappear Jun 22 21:22:33.861: INFO: Pod pod-d8d06e10-853b-4ee1-80a3-3f8ccb751802 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:22:33.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7015" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:22:33.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-eddb2bbb-5090-45c9-bcb9-3e579441de0b STEP: Creating a pod to test consume configMaps Jun 22 21:22:34.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b" in namespace "configmap-5155" to be "success or failure" Jun 22 21:22:34.036: INFO: Pod "pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.577948ms Jun 22 21:22:36.039: INFO: Pod "pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019383082s Jun 22 21:22:38.136: INFO: Pod "pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116146633s STEP: Saw pod success Jun 22 21:22:38.136: INFO: Pod "pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b" satisfied condition "success or failure" Jun 22 21:22:38.140: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b container configmap-volume-test: STEP: delete the pod Jun 22 21:22:38.205: INFO: Waiting for pod pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b to disappear Jun 22 21:22:38.233: INFO: Pod pod-configmaps-ef9f4fba-14b9-4fc2-ac0d-c2ab4422ea9b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:22:38.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5155" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":811,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:22:38.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-8234650c-21c8-44cd-9d9b-0b64b4fd8cc7 STEP: Creating a pod to test consume configMaps Jun 22 21:22:38.411: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568" in namespace "projected-9648" to be "success or failure" Jun 22 21:22:38.448: INFO: Pod "pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568": Phase="Pending", Reason="", readiness=false. Elapsed: 36.599547ms Jun 22 21:22:40.452: INFO: Pod "pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041014737s Jun 22 21:22:42.457: INFO: Pod "pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045846672s STEP: Saw pod success Jun 22 21:22:42.457: INFO: Pod "pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568" satisfied condition "success or failure" Jun 22 21:22:42.460: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568 container projected-configmap-volume-test: STEP: delete the pod Jun 22 21:22:42.498: INFO: Waiting for pod pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568 to disappear Jun 22 21:22:42.511: INFO: Pod pod-projected-configmaps-38755186-3dad-466c-9f48-ac9738dcc568 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:22:42.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9648" for this suite. 
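[Note] "consumable in multiple volumes in the same pod" simply mounts the same ConfigMap at two paths via two projected volumes. A minimal sketch with illustrative names:

kubectl create configmap multi-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    projected:
      sources:
      - configMap:
          name: multi-demo
  - name: cm-b
    projected:
      sources:
      - configMap:
          name: multi-demo
EOF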
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":823,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:22:42.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-531 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 21:22:42.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 21:23:08.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.146:8080/dial?request=hostname&protocol=http&host=10.244.1.145&port=8080&tries=1'] Namespace:pod-network-test-531 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 21:23:08.717: INFO: >>> kubeConfig: /root/.kube/config I0622 21:23:08.743144 6 log.go:172] (0xc001bf4000) (0xc00222a460) Create stream I0622 21:23:08.743184 6 log.go:172] (0xc001bf4000) (0xc00222a460) Stream added, broadcasting: 1 I0622 21:23:08.746027 6 log.go:172] (0xc001bf4000) Reply frame received for 1 I0622 21:23:08.746076 6 log.go:172] (0xc001bf4000) (0xc002310000) Create stream I0622 21:23:08.746090 6 log.go:172] (0xc001bf4000) (0xc002310000) Stream added, broadcasting: 3 I0622 21:23:08.747340 6 log.go:172] (0xc001bf4000) Reply frame received for 3 I0622 21:23:08.747400 6 log.go:172] (0xc001bf4000) (0xc00294a000) Create stream I0622 21:23:08.747419 6 log.go:172] (0xc001bf4000) (0xc00294a000) Stream added, broadcasting: 5 I0622 21:23:08.748568 6 log.go:172] (0xc001bf4000) Reply frame received for 5 I0622 21:23:08.889615 6 log.go:172] (0xc001bf4000) Data frame received for 3 I0622 21:23:08.889659 6 log.go:172] (0xc002310000) (3) Data frame handling I0622 21:23:08.889694 6 log.go:172] (0xc002310000) (3) Data frame sent I0622 21:23:08.890370 6 log.go:172] (0xc001bf4000) Data frame received for 5 I0622 21:23:08.890474 6 log.go:172] (0xc00294a000) (5) Data frame handling I0622 21:23:08.890521 6 log.go:172] (0xc001bf4000) Data frame received for 3 I0622 21:23:08.890540 6 log.go:172] (0xc002310000) (3) Data frame handling I0622 21:23:08.892237 6 log.go:172] (0xc001bf4000) Data frame received for 1 I0622 21:23:08.892268 6 log.go:172] (0xc00222a460) (1) Data frame handling I0622 21:23:08.892301 6 log.go:172] (0xc00222a460) (1) Data frame sent I0622 21:23:08.892334 6 log.go:172] (0xc001bf4000) (0xc00222a460) Stream removed, broadcasting: 1 I0622 21:23:08.892477 6 log.go:172] (0xc001bf4000) Go away received I0622 21:23:08.892749 6 log.go:172] (0xc001bf4000) (0xc00222a460) Stream removed, broadcasting: 1 I0622 21:23:08.892772 6 log.go:172] (0xc001bf4000) (0xc002310000) Stream 
removed, broadcasting: 3 I0622 21:23:08.892788 6 log.go:172] (0xc001bf4000) (0xc00294a000) Stream removed, broadcasting: 5 Jun 22 21:23:08.892: INFO: Waiting for responses: map[] Jun 22 21:23:08.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.146:8080/dial?request=hostname&protocol=http&host=10.244.2.238&port=8080&tries=1'] Namespace:pod-network-test-531 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 21:23:08.903: INFO: >>> kubeConfig: /root/.kube/config I0622 21:23:08.936741 6 log.go:172] (0xc00299c790) (0xc00294a320) Create stream I0622 21:23:08.936768 6 log.go:172] (0xc00299c790) (0xc00294a320) Stream added, broadcasting: 1 I0622 21:23:08.939725 6 log.go:172] (0xc00299c790) Reply frame received for 1 I0622 21:23:08.939757 6 log.go:172] (0xc00299c790) (0xc00294a3c0) Create stream I0622 21:23:08.939774 6 log.go:172] (0xc00299c790) (0xc00294a3c0) Stream added, broadcasting: 3 I0622 21:23:08.940795 6 log.go:172] (0xc00299c790) Reply frame received for 3 I0622 21:23:08.940828 6 log.go:172] (0xc00299c790) (0xc00222a500) Create stream I0622 21:23:08.940844 6 log.go:172] (0xc00299c790) (0xc00222a500) Stream added, broadcasting: 5 I0622 21:23:08.942240 6 log.go:172] (0xc00299c790) Reply frame received for 5 I0622 21:23:09.019236 6 log.go:172] (0xc00299c790) Data frame received for 3 I0622 21:23:09.019273 6 log.go:172] (0xc00294a3c0) (3) Data frame handling I0622 21:23:09.019301 6 log.go:172] (0xc00294a3c0) (3) Data frame sent I0622 21:23:09.020001 6 log.go:172] (0xc00299c790) Data frame received for 5 I0622 21:23:09.020059 6 log.go:172] (0xc00222a500) (5) Data frame handling I0622 21:23:09.020100 6 log.go:172] (0xc00299c790) Data frame received for 3 I0622 21:23:09.020124 6 log.go:172] (0xc00294a3c0) (3) Data frame handling I0622 21:23:09.021944 6 log.go:172] (0xc00299c790) Data frame received for 1 I0622 21:23:09.021993 6 log.go:172] (0xc00294a320) (1) Data frame handling I0622 21:23:09.022023 6 log.go:172] (0xc00294a320) (1) Data frame sent I0622 21:23:09.022053 6 log.go:172] (0xc00299c790) (0xc00294a320) Stream removed, broadcasting: 1 I0622 21:23:09.022081 6 log.go:172] (0xc00299c790) Go away received I0622 21:23:09.022247 6 log.go:172] (0xc00299c790) (0xc00294a320) Stream removed, broadcasting: 1 I0622 21:23:09.022271 6 log.go:172] (0xc00299c790) (0xc00294a3c0) Stream removed, broadcasting: 3 I0622 21:23:09.022284 6 log.go:172] (0xc00299c790) (0xc00222a500) Stream removed, broadcasting: 5 Jun 22 21:23:09.022: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:23:09.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-531" for this suite. 
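[Note] The ExecWithOptions curls above go through the agnhost test image's /dial helper: a request to one test pod asks it to dial another pod and report what answered. The shape of the call, with placeholder IPs (the real ones are allocated per run, and the exact response format belongs to the test image, so treat it as indicative only):

# Run from a pod that can reach the prober pod; IPs are placeholders.
curl -g -q -s 'http://<prober-pod-ip>:8080/dial?request=hostname&protocol=http&host=<target-pod-ip>&port=8080&tries=1'
# A passing intra-pod check returns JSON listing the target's hostname,
# e.g. {"responses":["netserver-0"]}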
• [SLOW TEST:26.515 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":835,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:23:09.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:23:09.180: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 22 21:23:09.242: INFO: Number of nodes with available pods: 0 Jun 22 21:23:09.242: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 22 21:23:09.315: INFO: Number of nodes with available pods: 0 Jun 22 21:23:09.315: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:10.319: INFO: Number of nodes with available pods: 0 Jun 22 21:23:10.319: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:11.320: INFO: Number of nodes with available pods: 0 Jun 22 21:23:11.320: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:12.320: INFO: Number of nodes with available pods: 1 Jun 22 21:23:12.320: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 22 21:23:12.400: INFO: Number of nodes with available pods: 1 Jun 22 21:23:12.400: INFO: Number of running nodes: 0, number of available pods: 1 Jun 22 21:23:13.404: INFO: Number of nodes with available pods: 0 Jun 22 21:23:13.404: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 22 21:23:13.434: INFO: Number of nodes with available pods: 0 Jun 22 21:23:13.434: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:14.509: INFO: Number of nodes with available pods: 0 Jun 22 21:23:14.509: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:15.526: INFO: Number of nodes with available pods: 0 Jun 22 21:23:15.526: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:16.438: INFO: Number of nodes with available pods: 0 Jun 22 21:23:16.438: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:17.438: INFO: Number of nodes with available pods: 0 Jun 22 21:23:17.438: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:18.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:18.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:19.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:19.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:20.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:20.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:21.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:21.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:22.438: INFO: Number of nodes with available pods: 0 Jun 22 21:23:22.438: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:23.438: INFO: Number of nodes with available pods: 0 Jun 22 21:23:23.438: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:24.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:24.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:25.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:25.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:26.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:26.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:27.438: INFO: Number of nodes with available pods: 0 Jun 22 21:23:27.438: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:28.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:28.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:29.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:29.439: INFO: Node jerma-worker is running more than one daemon 
pod Jun 22 21:23:30.439: INFO: Number of nodes with available pods: 0 Jun 22 21:23:30.439: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:31.484: INFO: Number of nodes with available pods: 0 Jun 22 21:23:31.484: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:23:32.439: INFO: Number of nodes with available pods: 1 Jun 22 21:23:32.439: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7223, will wait for the garbage collector to delete the pods Jun 22 21:23:32.504: INFO: Deleting DaemonSet.extensions daemon-set took: 6.791164ms Jun 22 21:23:32.804: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.286979ms Jun 22 21:23:39.316: INFO: Number of nodes with available pods: 0 Jun 22 21:23:39.316: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 21:23:39.319: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7223/daemonsets","resourceVersion":"26479729"},"items":null} Jun 22 21:23:39.332: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7223/pods","resourceVersion":"26479730"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:23:39.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7223" for this suite. • [SLOW TEST:30.333 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":46,"skipped":837,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:23:39.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-172e6760-a024-42fa-bd88-2fcfdf2732b6 STEP: Creating a pod to test consume secrets Jun 22 21:23:39.458: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4" in namespace "projected-393" to be "success or failure" Jun 22 21:23:39.494: INFO: Pod "pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4": 
Phase="Pending", Reason="", readiness=false. Elapsed: 35.071478ms Jun 22 21:23:41.497: INFO: Pod "pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038944291s Jun 22 21:23:43.502: INFO: Pod "pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043681575s STEP: Saw pod success Jun 22 21:23:43.502: INFO: Pod "pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4" satisfied condition "success or failure" Jun 22 21:23:43.505: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4 container projected-secret-volume-test: STEP: delete the pod Jun 22 21:23:43.532: INFO: Waiting for pod pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4 to disappear Jun 22 21:23:43.542: INFO: Pod pod-projected-secrets-330c9c3b-3c5a-45f5-9c6f-d678655328e4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:23:43.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-393" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":843,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:23:43.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:23:43.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4712' Jun 22 21:23:46.674: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 21:23:46.674: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jun 22 21:23:46.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4712' Jun 22 21:23:46.804: INFO: stderr: "" Jun 22 21:23:46.804: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:23:46.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4712" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":48,"skipped":853,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:23:46.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:23:46.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 22 21:23:47.043: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:47Z generation:1 name:name1 resourceVersion:26479810 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fa6778d6-a339-4fad-81fc-28e4aa9fbdd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 22 21:23:57.048: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:57Z generation:1 name:name2 resourceVersion:26479854 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:13124610-21d3-4bdb-9eb7-423c73dcf3ad] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 22 21:24:07.055: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:47Z generation:2 name:name1 resourceVersion:26479886 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fa6778d6-a339-4fad-81fc-28e4aa9fbdd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 22 21:24:17.062: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:57Z generation:2 name:name2 resourceVersion:26479918 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:13124610-21d3-4bdb-9eb7-423c73dcf3ad] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 22 21:24:27.070: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:47Z generation:2 name:name1 resourceVersion:26479948 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fa6778d6-a339-4fad-81fc-28e4aa9fbdd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 22 21:24:37.078: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-22T21:23:57Z generation:2 name:name2 resourceVersion:26479978 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:13124610-21d3-4bdb-9eb7-423c73dcf3ad] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:24:47.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1837" for this suite. • [SLOW TEST:60.757 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":49,"skipped":862,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:24:47.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:24:47.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6427" for this suite. 
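[Note] Both "busybox command that always fails" specs reduce to a pod whose container exits non-zero: the earlier one asserts a terminated reason is reported, this one that such a pod can still be deleted. A minimal reproduction (names are illustrative; with restartPolicy Never the terminated reason is typically "Error"):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: fail
    image: busybox:1.29
    command: ["/bin/false"]
EOF
kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
kubectl delete pod bin-false-demo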
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":881,"failed":0} ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:24:47.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 22 21:24:47.822: INFO: Waiting up to 5m0s for pod "downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f" in namespace "downward-api-1199" to be "success or failure" Jun 22 21:24:47.823: INFO: Pod "downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.692887ms Jun 22 21:24:49.850: INFO: Pod "downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028063514s Jun 22 21:24:51.863: INFO: Pod "downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040929115s STEP: Saw pod success Jun 22 21:24:51.863: INFO: Pod "downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f" satisfied condition "success or failure" Jun 22 21:24:51.865: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f container dapi-container: STEP: delete the pod Jun 22 21:24:51.914: INFO: Waiting for pod downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f to disappear Jun 22 21:24:51.924: INFO: Pod downward-api-b31083c1-60f0-413e-9e0d-a05c3ed6956f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:24:51.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1199" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":881,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:24:51.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jun 22 21:24:52.006: INFO: Waiting up to 5m0s for pod "client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332" in namespace "containers-7240" to be "success or failure" Jun 22 21:24:52.014: INFO: Pod "client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332": Phase="Pending", Reason="", readiness=false. Elapsed: 8.842647ms Jun 22 21:24:54.018: INFO: Pod "client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012581368s Jun 22 21:24:56.022: INFO: Pod "client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016791043s STEP: Saw pod success Jun 22 21:24:56.022: INFO: Pod "client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332" satisfied condition "success or failure" Jun 22 21:24:56.025: INFO: Trying to get logs from node jerma-worker2 pod client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332 container test-container: STEP: delete the pod Jun 22 21:24:56.084: INFO: Waiting for pod client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332 to disappear Jun 22 21:24:56.087: INFO: Pod client-containers-74a6b2cc-0aa0-41e2-b488-f0ac40b97332 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:24:56.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7240" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":886,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:24:56.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 22 21:24:56.216: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480091 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 21:24:56.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480092 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 22 21:24:56.216: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480093 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 22 21:25:06.247: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480141 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 21:25:06.247: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480142 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 22 21:25:06.248: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4378 /api/v1/namespaces/watch-4378/configmaps/e2e-watch-test-label-changed 333b419d-d849-475f-8099-d89eb670f6a4 26480143 0 2020-06-22 21:24:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:25:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4378" for this suite. • [SLOW TEST:10.160 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":53,"skipped":899,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:25:06.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:25:06.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9601' Jun 22 21:25:06.443: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 21:25:06.443: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jun 22 21:25:06.457: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 22 21:25:06.463: INFO: scanned /root for discovery docs: Jun 22 21:25:06.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9601' Jun 22 21:25:22.309: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 21:25:22.309: INFO: stdout: "Created e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4\nScaling up e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jun 22 21:25:22.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9601' Jun 22 21:25:22.401: INFO: stderr: "" Jun 22 21:25:22.401: INFO: stdout: "e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4-g554c " Jun 22 21:25:22.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4-g554c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9601' Jun 22 21:25:22.491: INFO: stderr: "" Jun 22 21:25:22.491: INFO: stdout: "true" Jun 22 21:25:22.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4-g554c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9601' Jun 22 21:25:22.583: INFO: stderr: "" Jun 22 21:25:22.583: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jun 22 21:25:22.583: INFO: e2e-test-httpd-rc-8aec1a9f7796c4cb6f0facf80f379bb4-g554c is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jun 22 21:25:22.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9601' Jun 22 21:25:22.685: INFO: stderr: "" Jun 22 21:25:22.685: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:25:22.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9601" for this suite. • [SLOW TEST:16.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":54,"skipped":901,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:25:22.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-591f43bc-50c5-4937-bf5c-47ef206a0443 STEP: Creating a pod to test consume secrets Jun 22 21:25:22.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce" in namespace "projected-5599" to be "success or failure" Jun 22 21:25:22.800: INFO: Pod "pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024845ms Jun 22 21:25:24.804: INFO: Pod "pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010163729s Jun 22 21:25:26.809: INFO: Pod "pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014841063s STEP: Saw pod success Jun 22 21:25:26.809: INFO: Pod "pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce" satisfied condition "success or failure" Jun 22 21:25:26.812: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce container projected-secret-volume-test: STEP: delete the pod Jun 22 21:25:26.860: INFO: Waiting for pod pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce to disappear Jun 22 21:25:26.874: INFO: Pod pod-projected-secrets-6ee6d4d6-1985-41a7-9bdd-71db26fb81ce no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:25:26.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5599" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:25:26.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-61a44b96-4f96-48f4-b161-64ae59a2d58d STEP: Creating a pod to test consume configMaps Jun 22 21:25:27.006: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb" in namespace "projected-4605" to be "success or failure" Jun 22 21:25:27.033: INFO: Pod "pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.926389ms Jun 22 21:25:29.073: INFO: Pod "pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066887625s Jun 22 21:25:31.077: INFO: Pod "pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071177664s STEP: Saw pod success Jun 22 21:25:31.077: INFO: Pod "pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb" satisfied condition "success or failure" Jun 22 21:25:31.081: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb container projected-configmap-volume-test: STEP: delete the pod Jun 22 21:25:31.122: INFO: Waiting for pod pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb to disappear Jun 22 21:25:31.142: INFO: Pod pod-projected-configmaps-c1214678-741a-4c8f-9d06-d9510c3be8fb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:25:31.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4605" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:25:31.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d6f9b5a2-8ad6-4fdb-aec2-9dd93e65c3b7 STEP: Creating secret with name s-test-opt-upd-85c5bd2d-3cdb-4f22-8c1a-0e90b882d030 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d6f9b5a2-8ad6-4fdb-aec2-9dd93e65c3b7 STEP: Updating secret s-test-opt-upd-85c5bd2d-3cdb-4f22-8c1a-0e90b882d030 STEP: Creating secret with name s-test-opt-create-a706a8e0-6523-4eea-907b-e7672dcbb2de STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:26:55.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8837" for this suite. 
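For reference, the optional-secret-volume pattern exercised above can be sketched by hand as follows (all names hypothetical). "optional: true" lets the pod start even before the secret exists, and the kubelet periodically re-syncs projected content, so later creates and updates eventually appear in the mounted files, which is why the test spends most of its 84 seconds waiting to observe the update:

# Hypothetical sketch of an optional secret volume.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: viewer
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-demo   # may not exist yet
      optional: true                # pod is admitted and started anyway
EOF
# Creating the secret afterwards is eventually reflected in the volume:
kubectl create secret generic s-test-opt-demo --from-literal=data-1=value-1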
• [SLOW TEST:84.597 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":999,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:26:55.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 22 21:26:55.808: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 21:26:55.865: INFO: Waiting for terminating namespaces to be deleted... Jun 22 21:26:55.868: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 22 21:26:55.875: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:26:55.875: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:26:55.875: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:26:55.875: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:26:55.875: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 22 21:26:55.881: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:26:55.881: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:26:55.881: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Jun 22 21:26:55.881: INFO: Container kube-bench ready: false, restart count 0 Jun 22 21:26:55.881: INFO: pod-secrets-4698d5c2-d3a6-4dc6-ab03-938a512fc37d from secrets-8837 started at 2020-06-22 21:25:31 +0000 UTC (3 container statuses recorded) Jun 22 21:26:55.881: INFO: Container creates-volume-test ready: true, restart count 0 Jun 22 21:26:55.881: INFO: Container dels-volume-test ready: true, restart count 0 Jun 22 21:26:55.881: INFO: Container upds-volume-test ready: true, restart count 0 Jun 22 21:26:55.881: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:26:55.881: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:26:55.881: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Jun 22 21:26:55.881: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch 
a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2926780e-28c0-4d5a-a35f-689dae6dc914 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2926780e-28c0-4d5a-a35f-689dae6dc914 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-2926780e-28c0-4d5a-a35f-689dae6dc914 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:06.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7527" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.334 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":58,"skipped":1017,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:06.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-11470340-eef4-4558-850b-f01c0fa80ad9 STEP: Creating a pod to test consume configMaps Jun 22 21:27:06.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c" in namespace "configmap-2934" to be "success or failure" Jun 22 21:27:06.297: INFO: Pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.74314ms Jun 22 21:27:08.301: INFO: Pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020727306s Jun 22 21:27:10.304: INFO: Pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c": Phase="Running", Reason="", readiness=true. Elapsed: 4.023942306s Jun 22 21:27:12.308: INFO: Pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027257891s STEP: Saw pod success Jun 22 21:27:12.308: INFO: Pod "pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c" satisfied condition "success or failure" Jun 22 21:27:12.310: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c container configmap-volume-test: STEP: delete the pod Jun 22 21:27:12.329: INFO: Waiting for pod pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c to disappear Jun 22 21:27:12.334: INFO: Pod pod-configmaps-1ac7624d-9395-4a8d-86b6-65ed5c6a982c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:12.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2934" for this suite. • [SLOW TEST:6.258 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1028,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:12.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:27:12.385: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 22 21:27:14.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1673 create -f -' Jun 22 21:27:18.255: INFO: stderr: "" Jun 22 21:27:18.255: INFO: stdout: "e2e-test-crd-publish-openapi-6034-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 22 21:27:18.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1673 delete e2e-test-crd-publish-openapi-6034-crds test-cr' Jun 22 21:27:18.358: INFO: stderr: "" Jun 22 21:27:18.358: INFO: stdout: "e2e-test-crd-publish-openapi-6034-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 22 21:27:18.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1673 apply -f -' Jun 22 21:27:18.621: INFO: stderr: "" Jun 22 21:27:18.621: INFO: stdout: "e2e-test-crd-publish-openapi-6034-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 22 21:27:18.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1673 delete 
e2e-test-crd-publish-openapi-6034-crds test-cr' Jun 22 21:27:18.727: INFO: stderr: "" Jun 22 21:27:18.727: INFO: stdout: "e2e-test-crd-publish-openapi-6034-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 22 21:27:18.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6034-crds' Jun 22 21:27:18.962: INFO: stderr: "" Jun 22 21:27:18.962: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6034-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:21.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1673" for this suite. • [SLOW TEST:9.510 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":60,"skipped":1028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:21.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 22 21:27:21.895: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 21:27:21.927: INFO: Waiting for terminating namespaces to be deleted... 
Jun 22 21:27:21.930: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 22 21:27:21.934: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:27:21.934: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:27:21.934: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:27:21.934: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:27:21.934: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 22 21:27:21.940: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:27:21.940: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:27:21.940: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Jun 22 21:27:21.940: INFO: Container kube-hunter ready: false, restart count 0 Jun 22 21:27:21.940: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 22 21:27:21.940: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:27:21.940: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Jun 22 21:27:21.940: INFO: Container kube-bench ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Jun 22 21:27:22.050: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Jun 22 21:27:22.050: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Jun 22 21:27:22.050: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Jun 22 21:27:22.050: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jun 22 21:27:22.050: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jun 22 21:27:22.055: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3.161afadc41dfc9d7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9531/filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3.161afadcb3f45acb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3.161afadd0033afa9], Reason = [Created], Message = [Created container filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3.161afadd0e2f94cf], Reason = [Started], Message = [Started container filler-pod-5d0cf335-f7bc-4c45-ac3f-2a1f8ee29fd3] STEP: Considering event: Type = [Normal], Name = [filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc.161afadc4027726e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9531/filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc.161afadc8dc9f8c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc.161afadce7d0fb10], Reason = [Created], Message = [Created container filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc] STEP: Considering event: Type = [Normal], Name = [filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc.161afadcfd378b50], Reason = [Started], Message = [Started container filler-pod-8942b6c9-3ebf-478b-978f-7f0a1375eabc] STEP: Considering event: Type = [Warning], Name = [additional-pod.161afadda8d4f89a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.161afaddacf6cce6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:29.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9531" for this suite. 
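For reference, the FailedScheduling outcome exercised above (filler pods reserve most allocatable CPU, then one more pod cannot fit) can be reproduced by hand with a sketch like the following, using hypothetical names and an arbitrary request value; the point is only that the request exceeds what remains allocatable on every schedulable node:

# Hypothetical pod whose CPU request cannot be satisfied anywhere.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000m"   # pick any value above the nodes' remaining allocatable CPU
EOF
kubectl describe pod additional-pod-demo   # Events should show FailedScheduling with "Insufficient cpu"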
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.389 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":61,"skipped":1077,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:29.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-4138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4138 to expose endpoints map[] Jun 22 21:27:29.377: INFO: Get endpoints failed (3.787038ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 22 21:27:30.381: INFO: successfully validated that service multi-endpoint-test in namespace services-4138 exposes endpoints map[] (1.007927418s elapsed) STEP: Creating pod pod1 in namespace services-4138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4138 to expose endpoints map[pod1:[100]] Jun 22 21:27:34.448: INFO: successfully validated that service multi-endpoint-test in namespace services-4138 exposes endpoints map[pod1:[100]] (4.05996089s elapsed) STEP: Creating pod pod2 in namespace services-4138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4138 to expose endpoints map[pod1:[100] pod2:[101]] Jun 22 21:27:37.594: INFO: successfully validated that service multi-endpoint-test in namespace services-4138 exposes endpoints map[pod1:[100] pod2:[101]] (3.14262544s elapsed) STEP: Deleting pod pod1 in namespace services-4138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4138 to expose endpoints map[pod2:[101]] Jun 22 21:27:38.617: INFO: successfully validated that service multi-endpoint-test in namespace services-4138 exposes endpoints map[pod2:[101]] (1.019104327s elapsed) STEP: Deleting pod pod2 in namespace services-4138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4138 to expose endpoints map[] Jun 22 21:27:39.703: INFO: successfully validated that service multi-endpoint-test in namespace services-4138 exposes endpoints map[] (1.080739204s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 
21:27:40.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4138" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.777 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":62,"skipped":1087,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:40.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod Jun 22 21:27:40.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4127 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 22 21:27:40.332: INFO: stderr: "" Jun 22 21:27:40.332: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jun 22 21:27:40.332: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 22 21:27:40.332: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4127" to be "running and ready, or succeeded" Jun 22 21:27:40.366: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 34.138216ms Jun 22 21:27:42.370: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038119261s Jun 22 21:27:44.374: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.042095592s Jun 22 21:27:44.374: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 22 21:27:44.374: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Jun 22 21:27:44.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127' Jun 22 21:27:44.500: INFO: stderr: "" Jun 22 21:27:44.500: INFO: stdout: "I0622 21:27:42.846222 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/99d 332\nI0622 21:27:43.046517 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/g4z 593\nI0622 21:27:43.246390 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/s8k8 555\nI0622 21:27:43.446410 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/c77b 375\nI0622 21:27:43.646459 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/45d 216\nI0622 21:27:43.846407 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bmg 295\nI0622 21:27:44.046460 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/dcr9 406\nI0622 21:27:44.246420 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6gzt 533\nI0622 21:27:44.446403 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/75qb 244\n" STEP: limiting log lines Jun 22 21:27:44.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127 --tail=1' Jun 22 21:27:44.614: INFO: stderr: "" Jun 22 21:27:44.614: INFO: stdout: "I0622 21:27:44.446403 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/75qb 244\n" Jun 22 21:27:44.614: INFO: got output "I0622 21:27:44.446403 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/75qb 244\n" STEP: limiting log bytes Jun 22 21:27:44.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127 --limit-bytes=1' Jun 22 21:27:44.714: INFO: stderr: "" Jun 22 21:27:44.714: INFO: stdout: "I" Jun 22 21:27:44.714: INFO: got output "I" STEP: exposing timestamps Jun 22 21:27:44.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127 --tail=1 --timestamps' Jun 22 21:27:44.824: INFO: stderr: "" Jun 22 21:27:44.824: INFO: stdout: "2020-06-22T21:27:44.646558602Z I0622 21:27:44.646374 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/pdr8 540\n" Jun 22 21:27:44.824: INFO: got output "2020-06-22T21:27:44.646558602Z I0622 21:27:44.646374 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/pdr8 540\n" STEP: restricting to a time range Jun 22 21:27:47.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127 --since=1s' Jun 22 21:27:47.442: INFO: stderr: "" Jun 22 21:27:47.442: INFO: stdout: "I0622 21:27:46.446378 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/74zk 480\nI0622 21:27:46.646392 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/kt9 332\nI0622 21:27:46.846429 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/hh4 305\nI0622 21:27:47.046415 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/ps2w 588\nI0622 21:27:47.246404 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/scz 583\n" Jun 22 21:27:47.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4127 --since=24h' Jun 22 21:27:47.543: INFO: stderr: "" Jun 22 21:27:47.543: INFO: stdout: "I0622 21:27:42.846222 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/99d 332\nI0622 21:27:43.046517 
1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/g4z 593\nI0622 21:27:43.246390 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/s8k8 555\nI0622 21:27:43.446410 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/c77b 375\nI0622 21:27:43.646459 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/45d 216\nI0622 21:27:43.846407 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bmg 295\nI0622 21:27:44.046460 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/dcr9 406\nI0622 21:27:44.246420 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6gzt 533\nI0622 21:27:44.446403 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/75qb 244\nI0622 21:27:44.646374 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/pdr8 540\nI0622 21:27:44.846370 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/tqgt 568\nI0622 21:27:45.046360 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/lf4 556\nI0622 21:27:45.246423 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/grll 209\nI0622 21:27:45.446419 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/zmv 543\nI0622 21:27:45.646390 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/rh58 229\nI0622 21:27:45.846415 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/xrv 401\nI0622 21:27:46.046398 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/cddm 401\nI0622 21:27:46.246407 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sb48 332\nI0622 21:27:46.446378 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/74zk 480\nI0622 21:27:46.646392 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/kt9 332\nI0622 21:27:46.846429 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/hh4 305\nI0622 21:27:47.046415 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/ps2w 588\nI0622 21:27:47.246404 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/scz 583\nI0622 21:27:47.446345 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/qpp 417\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Jun 22 21:27:47.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4127' Jun 22 21:27:49.655: INFO: stderr: "" Jun 22 21:27:49.655: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:49.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4127" for this suite. 
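The log-filtering flags exercised above, shown in plain kubectl form for reference (pod and container names as used by the test; all of these flags appear verbatim in the commands the test ran):

kubectl logs logs-generator logs-generator                         # full container log
kubectl logs logs-generator logs-generator --tail=1                # last line only
kubectl logs logs-generator logs-generator --limit-bytes=1        # first byte only
kubectl logs logs-generator logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator logs-generator --since=1s             # only entries from the last second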
• [SLOW TEST:9.645 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":63,"skipped":1091,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:49.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 22 21:27:54.239: INFO: Successfully updated pod "annotationupdate96f64fab-651b-4dba-943d-93c05f9e430c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:27:58.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4025" for this suite. 
• [SLOW TEST:8.660 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1114,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:27:58.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-33165af7-4948-45a2-b2e0-3c26df8a555d STEP: Creating a pod to test consume configMaps Jun 22 21:27:58.455: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138" in namespace "configmap-9210" to be "success or failure" Jun 22 21:27:58.472: INFO: Pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138": Phase="Pending", Reason="", readiness=false. Elapsed: 17.370203ms Jun 22 21:28:00.476: INFO: Pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021637758s Jun 22 21:28:02.481: INFO: Pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138": Phase="Running", Reason="", readiness=true. Elapsed: 4.026290636s Jun 22 21:28:04.486: INFO: Pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030930009s STEP: Saw pod success Jun 22 21:28:04.486: INFO: Pod "pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138" satisfied condition "success or failure" Jun 22 21:28:04.489: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138 container configmap-volume-test: STEP: delete the pod Jun 22 21:28:04.541: INFO: Waiting for pod pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138 to disappear Jun 22 21:28:04.546: INFO: Pod pod-configmaps-3f4dc355-2a10-4012-8ed6-ee9a95efa138 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:28:04.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9210" for this suite. 
• [SLOW TEST:6.229 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1116,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:28:04.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:28:04.703: INFO: Creating deployment "webserver-deployment" Jun 22 21:28:04.710: INFO: Waiting for observed generation 1 Jun 22 21:28:06.880: INFO: Waiting for all required pods to come up Jun 22 21:28:07.024: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 22 21:28:17.113: INFO: Waiting for deployment "webserver-deployment" to complete Jun 22 21:28:17.120: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 22 21:28:17.128: INFO: Updating deployment webserver-deployment Jun 22 21:28:17.128: INFO: Waiting for observed generation 2 Jun 22 21:28:19.216: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 22 21:28:19.408: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 22 21:28:19.587: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 22 21:28:19.779: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 22 21:28:19.779: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 22 21:28:19.931: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 22 21:28:19.936: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 22 21:28:19.936: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 22 21:28:19.941: INFO: Updating deployment webserver-deployment Jun 22 21:28:19.941: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 22 21:28:20.442: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 22 21:28:20.447: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 22 21:28:22.748: INFO: Deployment 
"webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8752 /apis/apps/v1/namespaces/deployment-8752/deployments/webserver-deployment 88ace742-6ed2-40f1-92ee-99d6db76997e 26481432 3 2020-06-22 21:28:04 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005210038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-22 21:28:20 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-06-22 21:28:20 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 22 21:28:22.898: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8752 /apis/apps/v1/namespaces/deployment-8752/replicasets/webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 26481428 3 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 88ace742-6ed2-40f1-92ee-99d6db76997e 0xc005210517 0xc005210518}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005210588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 22 21:28:22.898: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jun 22 21:28:22.899: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8752 /apis/apps/v1/namespaces/deployment-8752/replicasets/webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 26481417 3 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 88ace742-6ed2-40f1-92ee-99d6db76997e 0xc005210457 0xc005210458}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052104b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
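Reading the two ReplicaSet dumps above against the Deployment status: Replicas:33 is the surged total (30 + 3); the new ReplicaSet (revision 2, image webserver:404) holds 13 of them but reports ReadyReplicas:0 because that image can never be pulled, so all 8 available pods belong to the old ReplicaSet (revision 1, httpd:2.4.38-alpine). The Available=False / MinimumReplicasUnavailable condition follows directly from maxUnavailable: availability requires at least spec.replicas - maxUnavailable ready pods. A small illustrative check in Go (the values are taken from the dumps; this is a sketch, not framework code):

package main

import "fmt"

func main() {
	specReplicas, maxUnavailable := int32(30), int32(2)
	availableReplicas := int32(8) // old RS only; the new RS contributes 0

	minAvailable := specReplicas - maxUnavailable // 28
	if availableReplicas < minAvailable {
		fmt.Printf("Available=False (MinimumReplicasUnavailable): %d < %d\n",
			availableReplicas, minAvailable)
	}
}

With 8 < 28 the deployment correctly reports "Deployment does not have minimum availability." in its Available condition, and UnavailableReplicas:25 is simply 33 - 8. The per-pod dumps below show which pods supply those 8.
Jun 22 21:28:22.933: INFO: Pod "webserver-deployment-595b5b9587-25j8d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-25j8d webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-25j8d bfd1a52c-35f6-48b4-bd4b-13112a374d16 26481268 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005210a37 0xc005210a38}] []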
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.251,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46844e440460573963d38e38dabd51b3df9979f376c3af3ae69f55a671dccfba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.934: INFO: Pod "webserver-deployment-595b5b9587-6grzc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6grzc webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-6grzc 436d40a9-eb0b-429d-9a90-c7ec1aed886a 26481235 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005210bb7 0xc005210bb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.248,StartTime:2020-06-22 21:28:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bbbc62d0756640955d2a46979a73227d3560eedb48ff027ffb50cd395ee04211,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
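The pod above (6grzc) counts as available while 6zv46 below does not: with the deployment's MinReadySeconds:0, a pod is available as soon as its Ready condition turns True, which happened for 6grzc at 21:28:13. 6zv46 was only created at 21:28:19 by the scale-up and is still Pending in ContainerCreating, so Ready is False. A self-contained Go sketch of that availability rule, modeled on (but not identical to) the pod-utility helper in k8s.io/kubernetes:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether pod has been Ready for at least
// minReadySeconds as of now. With minReadySeconds = 0, Ready alone suffices.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// Ready condition as dumped for pod 6grzc (LastTransitionTime 21:28:13).
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
		Type:               corev1.PodReady,
		Status:             corev1.ConditionTrue,
		LastTransitionTime: metav1.Date(2020, time.June, 22, 21, 28, 13, 0, time.UTC),
	}}}}
	fmt.Println(isPodAvailable(pod, 0, time.Now())) // true
}

Pods that never leave ContainerCreating, or that sit on the unpullable webserver:404 image, never gain a True Ready condition, so they stay "not available" in the listing that continues below.
Jun 22 21:28:22.934: INFO: Pod "webserver-deployment-595b5b9587-6zv46" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6zv46 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-6zv46 acb55d25-9f77-46d5-b4c5-d26a513e89da 26481399 0 2020-06-22 21:28:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005210d37 0xc005210d38}] []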
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.935: INFO: Pod "webserver-deployment-595b5b9587-bd756" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bd756 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-bd756 2cec522b-221e-4910-9d82-fc912beb1925 26481249 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005210e97 0xc005210e98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.249,StartTime:2020-06-22 21:28:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6dcca25c7888fa856ec280792e2aacb22527117f789c59dccb00a79e52f4260a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.935: INFO: Pod "webserver-deployment-595b5b9587-bkjj9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bkjj9 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-bkjj9 f44538d1-b987-49df-8c69-eaaa76f63a8f 26481214 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211017 0xc005211018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.247,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da74f4d7b015299e642de5b01c3cd6be9382ec3c9e6219a0567db19c690f5d28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.936: INFO: Pod "webserver-deployment-595b5b9587-drlvp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-drlvp webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-drlvp 2b84c2f8-66f4-40e0-a86e-e964a43bd4c9 26481456 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211197 0xc005211198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.936: INFO: Pod "webserver-deployment-595b5b9587-g9l64" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g9l64 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-g9l64 dd5ca3d2-b56a-4723-9168-5f1f58496deb 26481271 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0052112f7 0xc0052112f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.250,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3c0e64a9c749451abeda12abc156e1e56ccf27efc95d36fb005e452ca9b811d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.936: INFO: Pod "webserver-deployment-595b5b9587-hfsw9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hfsw9 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-hfsw9 667929c1-bfb9-4c91-b507-e0fbae5f2f28 26481470 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211477 0xc005211478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.937: INFO: Pod "webserver-deployment-595b5b9587-hplln" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hplln webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-hplln ce44e127-f1cf-48c8-9005-3b59f66ab9a8 26481484 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0052115d7 0xc0052115d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.937: INFO: Pod "webserver-deployment-595b5b9587-jk7rh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jk7rh webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-jk7rh c51f9c36-6037-4e5d-8ad0-23f7954d5b70 26481478 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211737 0xc005211738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.937: INFO: Pod "webserver-deployment-595b5b9587-kxqh7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kxqh7 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-kxqh7 17e93f9b-d034-4e27-b356-ef611f8b34c8 26481230 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211897 0xc005211898}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.161,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f19d1cb9692f9d5ee82063db9861318ed82c34801695adf0190fe92155456022,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.938: INFO: Pod "webserver-deployment-595b5b9587-mdqbp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mdqbp webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-mdqbp e3b9fffd-ff22-43e0-b236-c126daf86f2a 26481452 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211a17 0xc005211a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.938: INFO: Pod "webserver-deployment-595b5b9587-msclf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-msclf webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-msclf 094ccb4b-b19d-40e7-a828-9958018716da 26481194 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211b77 0xc005211b78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.160,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9389ec35b4cf7ce39b06d2a472815dd6b36e9330adda58d5eaabfb9c2f238d66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.938: INFO: Pod "webserver-deployment-595b5b9587-pqtgf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pqtgf webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-pqtgf 533549cc-8fbc-4913-9709-ad8a109a7b9b 26481442 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211cf7 0xc005211cf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.939: INFO: Pod "webserver-deployment-595b5b9587-q4tk5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q4tk5 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-q4tk5 f57a028f-a84f-4547-b4f1-50d433019086 26481433 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211e57 0xc005211e58}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.939: INFO: Pod "webserver-deployment-595b5b9587-qrq7h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qrq7h webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-qrq7h c139dbe2-451f-4d75-812b-b7575fa3fa04 26481448 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc005211fb7 0xc005211fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.939: INFO: Pod "webserver-deployment-595b5b9587-rkj4r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rkj4r webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-rkj4r 4374a46e-ac14-49a0-9536-2bbcbcb59d5a 26481224 0 2020-06-22 21:28:04 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0051f0117 0xc0051f0118}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.162,StartTime:2020-06-22 21:28:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:28:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a7803ca14f92cb2d31ee115beb674272023f73f21c6670131fe3a25e95aec90,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.939: INFO: Pod "webserver-deployment-595b5b9587-snq5b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-snq5b webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-snq5b 5874852d-ad77-442b-9e3f-8072e297acc3 26481439 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0051f0297 0xc0051f0298}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.939: INFO: Pod "webserver-deployment-595b5b9587-v5nf2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v5nf2 webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-v5nf2 a2d2e83d-e687-4f39-ad6c-eddbdc2af86b 26481476 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0051f03f7 0xc0051f03f8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.940: INFO: Pod "webserver-deployment-595b5b9587-w65cr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w65cr webserver-deployment-595b5b9587- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-595b5b9587-w65cr d2fa3466-a667-4d87-a9a2-0e7038d64223 26481486 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e61ac231-2e24-4dc9-a4e1-ca6457878754 0xc0051f0557 0xc0051f0558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.940: INFO: Pod "webserver-deployment-c7997dcc8-26dlc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-26dlc webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-26dlc 30ec59c7-79e7-420a-b43a-9e9a6acee1f7 26481435 0 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f06d7 0xc0051f06d8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.252,StartTime:2020-06-22 21:28:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.940: INFO: Pod "webserver-deployment-c7997dcc8-6hhmk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6hhmk webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-6hhmk be87fb34-5830-4e58-ac8e-324959cc6813 26481485 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f0897 0xc0051f0898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.940: INFO: Pod "webserver-deployment-c7997dcc8-8x2rb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8x2rb webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-8x2rb 3b2423e9-09a9-4564-8886-b884b9735553 26481323 0 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f0a27 0xc0051f0a28}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.940: INFO: Pod "webserver-deployment-c7997dcc8-cfz8m" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cfz8m webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-cfz8m b19e74c4-c370-4aa9-8732-debc946c7377 26481333 0 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f0ba7 0xc0051f0ba8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,
Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.941: INFO: Pod "webserver-deployment-c7997dcc8-cnvtt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cnvtt webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-cnvtt 2502e307-c7ea-4391-91d2-fcae2eadf6ce 26481334 0 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f0d37 0xc0051f0d38}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.941: INFO: Pod "webserver-deployment-c7997dcc8-jljjw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jljjw webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-jljjw 87d7785e-4604-4aa4-8142-a1e22d6ab5fb 26481443 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f0eb7 0xc0051f0eb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.941: INFO: Pod "webserver-deployment-c7997dcc8-kc9qx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kc9qx webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-kc9qx cf57dd29-2f36-49de-b830-9a6df95b12e7 26481447 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f1037 0xc0051f1038}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.941: INFO: Pod "webserver-deployment-c7997dcc8-ktx6h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ktx6h webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-ktx6h 46072b36-902c-473f-ae1b-4e31877ca11c 26481429 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f11c7 0xc0051f11c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.942: INFO: Pod "webserver-deployment-c7997dcc8-ljq9p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ljq9p webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-ljq9p e5cb3f74-b11c-4bbb-bcb2-73cc42f9aa03 26481301 0 2020-06-22 21:28:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f1347 0xc0051f1348}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.942: INFO: Pod "webserver-deployment-c7997dcc8-nncsp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nncsp webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-nncsp 1e6734d1-64b2-4e2d-9f29-c98e40e2bc75 26481483 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f14c7 0xc0051f14c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.942: INFO: Pod "webserver-deployment-c7997dcc8-nq82k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nq82k webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-nq82k 7974320b-7b74-4769-94fc-6da317e26939 26481453 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f1647 0xc0051f1648}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-22 21:28:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.942: INFO: Pod "webserver-deployment-c7997dcc8-q6x6r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q6x6r webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-q6x6r 8e46bfac-f0ea-4868-915c-d54c9b040db8 26481426 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f17c7 0xc0051f17c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 21:28:22.943: INFO: Pod "webserver-deployment-c7997dcc8-v68s8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v68s8 webserver-deployment-c7997dcc8- deployment-8752 /api/v1/namespaces/deployment-8752/pods/webserver-deployment-c7997dcc8-v68s8 b4b5f1c4-b647-4f18-a348-e871ab087079 26481471 0 2020-06-22 21:28:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 78609cc2-8fa1-4173-88a4-9d9dbc71f417 0xc0051f18f7 0xc0051f18f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjrkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjrkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjrkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:28:22.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8752" for this suite. • [SLOW TEST:18.899 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":66,"skipped":1120,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:28:23.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
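The namespace-deletion wait logged just above ("Waiting for the namespace to be removed.") is a plain poll-until-NotFound loop. Below is a minimal client-go sketch of it, not the e2e framework's actual helper: it assumes a newer, context-taking client-go (this run's client is v1.17-era, whose calls took no context argument), and waitForNamespaceDeleted plus the namespace name are invented for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceDeleted polls until GET on the namespace returns NotFound,
// mirroring the "Waiting for the namespace to be removed" step above.
func waitForNamespaceDeleted(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // namespace, and everything in it, is gone
		}
		if err != nil {
			return err
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("namespace %s still present after %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Delete, then wait: once the namespace is NotFound, its pods are gone too,
	// which is what the test verifies next by recreating it and listing pods.
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), "nsdeletetest-demo", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	if err := waitForNamespaceDeleted(cs, "nsdeletetest-demo", 3*time.Minute); err != nil {
		panic(err)
	}
}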
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:04.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8894" for this suite. STEP: Destroying namespace "nsdeletetest-4464" for this suite. Jun 22 21:29:04.955: INFO: Namespace nsdeletetest-4464 was already deleted STEP: Destroying namespace "nsdeletetest-763" for this suite. • [SLOW TEST:41.506 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":67,"skipped":1128,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:04.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 22 21:29:05.023: INFO: Waiting up to 5m0s for pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8" in namespace "emptydir-8867" to be "success or failure" Jun 22 21:29:05.027: INFO: Pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99265ms Jun 22 21:29:07.032: INFO: Pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008372951s Jun 22 21:29:09.036: INFO: Pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012684708s Jun 22 21:29:11.041: INFO: Pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017075844s STEP: Saw pod success Jun 22 21:29:11.041: INFO: Pod "pod-4e607a85-97d0-4b89-946f-afcfd389c3d8" satisfied condition "success or failure" Jun 22 21:29:11.044: INFO: Trying to get logs from node jerma-worker2 pod pod-4e607a85-97d0-4b89-946f-afcfd389c3d8 container test-container: STEP: delete the pod Jun 22 21:29:11.078: INFO: Waiting for pod pod-4e607a85-97d0-4b89-946f-afcfd389c3d8 to disappear Jun 22 21:29:11.100: INFO: Pod pod-4e607a85-97d0-4b89-946f-afcfd389c3d8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:11.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8867" for this suite. 
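The emptydir specs in this stretch of the log all share one shape: create a pod whose volume is an emptyDir (medium Memory for the tmpfs variants), have the container create a file with the mode under test, and expect the pod to reach Succeeded. A rough sketch of such a pod spec follows; it uses plain busybox rather than the suite's own test image, the names are invented, and it only illustrates the (non-root,0644,tmpfs) case just logged.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	nonRoot := int64(1000) // run as a non-root UID, per the test name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a 0644 file and print its mode, then exit 0 so the
				// pod phase becomes Succeeded ("success or failure" check).
				Command: []string{"sh", "-c",
					"touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Print(string(b))
}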
• [SLOW TEST:6.148 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1149,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:11.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:29:11.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:29:13.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458151, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458151, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458151, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458151, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:29:16.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy 
mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:16.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7610" for this suite. STEP: Destroying namespace "webhook-7610-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.760 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":69,"skipped":1158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:16.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:29:16.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-390' Jun 22 21:29:17.022: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 21:29:17.022: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Jun 22 21:29:19.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-390' Jun 22 21:29:19.310: INFO: stderr: "" Jun 22 21:29:19.310: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:19.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-390" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":70,"skipped":1191,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:19.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 22 21:29:19.918: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 22 21:29:30.535: INFO: >>> kubeConfig: /root/.kube/config Jun 22 21:29:33.496: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:42.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-151" for this suite. 
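The crd-publish-openapi spec above exercises two layouts: one multiversion CRD, and two CRDs sharing a group. As a hedged sketch only, this is roughly what the first layout looks like in apiextensions/v1 terms; the group widgets.example.com and both version names are invented, and the suite's randomly generated CRDs differ.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	version := func(name string, storage bool) apiextv1.CustomResourceDefinitionVersion {
		return apiextv1.CustomResourceDefinitionVersion{
			Name:    name,
			Served:  true,
			Storage: storage, // exactly one version may be the storage version
			Schema: &apiextv1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
			},
		}
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			// Both versions are served, so both should surface in the
			// published OpenAPI document, which is what the test asserts.
			Versions: []apiextv1.CustomResourceDefinitionVersion{version("v1", true), version("v2", false)},
		},
	}
	b, _ := yaml.Marshal(crd)
	fmt.Print(string(b))
}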
• [SLOW TEST:23.558 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":71,"skipped":1197,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:42.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:29:43.065: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 22 21:29:45.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9734 create -f -' Jun 22 21:29:49.053: INFO: stderr: "" Jun 22 21:29:49.053: INFO: stdout: "e2e-test-crd-publish-openapi-3335-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 22 21:29:49.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9734 delete e2e-test-crd-publish-openapi-3335-crds test-cr' Jun 22 21:29:49.203: INFO: stderr: "" Jun 22 21:29:49.203: INFO: stdout: "e2e-test-crd-publish-openapi-3335-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 22 21:29:49.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9734 apply -f -' Jun 22 21:29:50.043: INFO: stderr: "" Jun 22 21:29:50.044: INFO: stdout: "e2e-test-crd-publish-openapi-3335-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 22 21:29:50.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9734 delete e2e-test-crd-publish-openapi-3335-crds test-cr' Jun 22 21:29:50.167: INFO: stderr: "" Jun 22 21:29:50.167: INFO: stdout: "e2e-test-crd-publish-openapi-3335-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 22 21:29:50.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3335-crds' Jun 22 21:29:50.417: INFO: stderr: "" Jun 22 21:29:50.417: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3335-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of 
this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:53.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9734" for this suite. • [SLOW TEST:10.324 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":72,"skipped":1197,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:53.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:29:57.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7927" for this suite. 
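The read-only kubelet spec just above turns on a single field, SecurityContext.ReadOnlyRootFilesystem, and expects writes to the container's root filesystem to fail. A small illustrative pod spec follows; the busybox image and all names are assumptions, not the suite's actual fixtures.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// The redirect to /file is expected to fail: the root
				// filesystem is mounted read-only by the runtime.
				Command: []string{"sh", "-c", "echo hi > /file; sleep 3600"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Print(string(b))
}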
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1214,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:29:57.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:29:58.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:30:00.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458198, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458198, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458198, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458198, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:30:03.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:03.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7852" for this suite. STEP: Destroying namespace "webhook-7852-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.415 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":74,"skipped":1216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:03.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 22 21:30:03.899: INFO: Waiting up to 5m0s for pod "pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4" in namespace "emptydir-3987" to be "success or failure" Jun 22 21:30:03.903: INFO: Pod "pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483421ms Jun 22 21:30:05.986: INFO: Pod "pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087742925s Jun 22 21:30:07.991: INFO: Pod "pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092035908s STEP: Saw pod success Jun 22 21:30:07.991: INFO: Pod "pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4" satisfied condition "success or failure" Jun 22 21:30:07.994: INFO: Trying to get logs from node jerma-worker pod pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4 container test-container: STEP: delete the pod Jun 22 21:30:08.015: INFO: Waiting for pod pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4 to disappear Jun 22 21:30:08.026: INFO: Pod pod-cde6b030-7861-4e99-9e9f-3576aa38e0e4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:08.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3987" for this suite. 
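The EmptyDir (root,0777,tmpfs) case mounts a memory-backed emptyDir and checks the volume's permission bits from inside the container; the (root,0777,default) case later in this run is identical except the medium is left empty (node-default storage). A sketch of the tmpfs variant, with busybox standing in for the suite's mount-test image, whose name is not shown in this log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" makes this a tmpfs mount;
					// leave Medium empty ("") for the node-default case.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Print the mode bits of the mount point, e.g. "777".
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}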
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1244,"failed":0} ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:08.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8417 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8417 STEP: Deleting pre-stop pod Jun 22 21:30:21.266: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8417" for this suite. 
• [SLOW TEST:13.257 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":76,"skipped":1244,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:21.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 21:30:21.367: INFO: Waiting up to 5m0s for pod "pod-99e26bd3-a0f9-452c-8cef-05f0154cde63" in namespace "emptydir-1371" to be "success or failure" Jun 22 21:30:21.371: INFO: Pod "pod-99e26bd3-a0f9-452c-8cef-05f0154cde63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447659ms Jun 22 21:30:23.374: INFO: Pod "pod-99e26bd3-a0f9-452c-8cef-05f0154cde63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007032027s Jun 22 21:30:25.378: INFO: Pod "pod-99e26bd3-a0f9-452c-8cef-05f0154cde63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011130939s STEP: Saw pod success Jun 22 21:30:25.379: INFO: Pod "pod-99e26bd3-a0f9-452c-8cef-05f0154cde63" satisfied condition "success or failure" Jun 22 21:30:25.381: INFO: Trying to get logs from node jerma-worker2 pod pod-99e26bd3-a0f9-452c-8cef-05f0154cde63 container test-container: STEP: delete the pod Jun 22 21:30:25.463: INFO: Waiting for pod pod-99e26bd3-a0f9-452c-8cef-05f0154cde63 to disappear Jun 22 21:30:25.491: INFO: Pod pod-99e26bd3-a0f9-452c-8cef-05f0154cde63 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:25.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1371" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1256,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:25.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:30:25.553: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 22 21:30:28.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1482 create -f -' Jun 22 21:30:31.818: INFO: stderr: "" Jun 22 21:30:31.818: INFO: stdout: "e2e-test-crd-publish-openapi-9253-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 22 21:30:31.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1482 delete e2e-test-crd-publish-openapi-9253-crds test-cr' Jun 22 21:30:31.945: INFO: stderr: "" Jun 22 21:30:31.945: INFO: stdout: "e2e-test-crd-publish-openapi-9253-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 22 21:30:31.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1482 apply -f -' Jun 22 21:30:32.794: INFO: stderr: "" Jun 22 21:30:32.794: INFO: stdout: "e2e-test-crd-publish-openapi-9253-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 22 21:30:32.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1482 delete e2e-test-crd-publish-openapi-9253-crds test-cr' Jun 22 21:30:33.606: INFO: stderr: "" Jun 22 21:30:33.606: INFO: stdout: "e2e-test-crd-publish-openapi-9253-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 22 21:30:33.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9253-crds' Jun 22 21:30:33.854: INFO: stderr: "" Jun 22 21:30:33.854: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9253-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:35.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1482" for this suite. 
• [SLOW TEST:10.231 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":78,"skipped":1257,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:35.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f04c156a-d432-4f09-863d-37acbb14a52e STEP: Creating a pod to test consume secrets Jun 22 21:30:35.821: INFO: Waiting up to 5m0s for pod "pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15" in namespace "secrets-2914" to be "success or failure" Jun 22 21:30:35.826: INFO: Pod "pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612429ms Jun 22 21:30:37.829: INFO: Pod "pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007671001s Jun 22 21:30:39.832: INFO: Pod "pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011256231s STEP: Saw pod success Jun 22 21:30:39.832: INFO: Pod "pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15" satisfied condition "success or failure" Jun 22 21:30:39.835: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15 container secret-volume-test: STEP: delete the pod Jun 22 21:30:39.990: INFO: Waiting for pod pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15 to disappear Jun 22 21:30:40.011: INFO: Pod pod-secrets-c90f4f03-71fa-45c4-812d-99ab9557ac15 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:40.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2914" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1276,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:40.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-26f39aad-d721-4c68-9621-466fdc44cb19 STEP: Creating a pod to test consume secrets Jun 22 21:30:40.157: INFO: Waiting up to 5m0s for pod "pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69" in namespace "secrets-7539" to be "success or failure" Jun 22 21:30:40.192: INFO: Pod "pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69": Phase="Pending", Reason="", readiness=false. Elapsed: 34.826317ms Jun 22 21:30:42.196: INFO: Pod "pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039276788s Jun 22 21:30:44.201: INFO: Pod "pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043955695s STEP: Saw pod success Jun 22 21:30:44.201: INFO: Pod "pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69" satisfied condition "success or failure" Jun 22 21:30:44.204: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69 container secret-volume-test: STEP: delete the pod Jun 22 21:30:44.277: INFO: Waiting for pod pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69 to disappear Jun 22 21:30:44.306: INFO: Pod pod-secrets-f4be8b54-71f5-4a8e-84f3-e7cb520c0d69 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7539" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:44.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:30:44.433: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8" in namespace "security-context-test-2195" to be "success or failure" Jun 22 21:30:44.449: INFO: Pod "busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.561057ms Jun 22 21:30:46.539: INFO: Pod "busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106333616s Jun 22 21:30:48.544: INFO: Pod "busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111172079s Jun 22 21:30:48.544: INFO: Pod "busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:30:48.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2195" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:30:48.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 22 21:30:48.625: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 21:30:48.672: INFO: Waiting for terminating namespaces to be deleted... Jun 22 21:30:48.678: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 22 21:30:48.683: INFO: busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8 from security-context-test-2195 started at 2020-06-22 21:30:44 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.683: INFO: Container busybox-user-65534-c5a98164-2e9a-42f0-b97d-953d1c8fbba8 ready: false, restart count 0 Jun 22 21:30:48.683: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.683: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:30:48.683: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.683: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 21:30:48.683: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 22 21:30:48.688: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.688: INFO: Container kube-hunter ready: false, restart count 0 Jun 22 21:30:48.688: INFO: tester from prestop-8417 started at 2020-06-22 21:30:12 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.688: INFO: Container tester ready: false, restart count 0 Jun 22 21:30:48.688: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.688: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 21:30:48.688: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.688: INFO: Container kube-bench ready: false, restart count 0 Jun 22 21:30:48.688: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 21:30:48.688: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ed6bb3c8-a034-4944-91b7-77a907ff53e2 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-ed6bb3c8-a034-4944-91b7-77a907ff53e2 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ed6bb3c8-a034-4944-91b7-77a907ff53e2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:35:56.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5994" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.438 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":82,"skipped":1341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:35:56.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:35:57.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10" in namespace "downward-api-8912" to be "success or failure" Jun 22 21:35:57.064: INFO: Pod "downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10": Phase="Pending", Reason="", readiness=false. Elapsed: 3.415504ms Jun 22 21:35:59.069: INFO: Pod "downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00815601s Jun 22 21:36:01.074: INFO: Pod "downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012955494s STEP: Saw pod success Jun 22 21:36:01.074: INFO: Pod "downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10" satisfied condition "success or failure" Jun 22 21:36:01.078: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10 container client-container: STEP: delete the pod Jun 22 21:36:01.128: INFO: Waiting for pod downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10 to disappear Jun 22 21:36:01.137: INFO: Pod downwardapi-volume-0fbe1899-1853-4490-aa19-0dcd69b90f10 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:36:01.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8912" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1369,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:36:01.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 21:36:01.223: INFO: Waiting up to 5m0s for pod "pod-cd303a66-bf7b-4792-bfbf-986b4e412d16" in namespace "emptydir-4034" to be "success or failure" Jun 22 21:36:01.226: INFO: Pod "pod-cd303a66-bf7b-4792-bfbf-986b4e412d16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.316356ms Jun 22 21:36:03.286: INFO: Pod "pod-cd303a66-bf7b-4792-bfbf-986b4e412d16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062856145s Jun 22 21:36:05.290: INFO: Pod "pod-cd303a66-bf7b-4792-bfbf-986b4e412d16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067167053s STEP: Saw pod success Jun 22 21:36:05.290: INFO: Pod "pod-cd303a66-bf7b-4792-bfbf-986b4e412d16" satisfied condition "success or failure" Jun 22 21:36:05.293: INFO: Trying to get logs from node jerma-worker2 pod pod-cd303a66-bf7b-4792-bfbf-986b4e412d16 container test-container: STEP: delete the pod Jun 22 21:36:05.342: INFO: Waiting for pod pod-cd303a66-bf7b-4792-bfbf-986b4e412d16 to disappear Jun 22 21:36:05.348: INFO: Pod pod-cd303a66-bf7b-4792-bfbf-986b4e412d16 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:36:05.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4034" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1370,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:36:05.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:36:12.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9226" for this suite. STEP: Destroying namespace "nsdeletetest-542" for this suite. Jun 22 21:36:12.084: INFO: Namespace nsdeletetest-542 was already deleted STEP: Destroying namespace "nsdeletetest-6912" for this suite. 
• [SLOW TEST:6.726 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":85,"skipped":1370,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:36:12.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:36:12.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6" in namespace "projected-1271" to be "success or failure" Jun 22 21:36:12.262: INFO: Pod "downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.443922ms Jun 22 21:36:14.280: INFO: Pod "downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067077597s Jun 22 21:36:16.284: INFO: Pod "downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07082902s STEP: Saw pod success Jun 22 21:36:16.284: INFO: Pod "downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6" satisfied condition "success or failure" Jun 22 21:36:16.286: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6 container client-container: STEP: delete the pod Jun 22 21:36:16.320: INFO: Waiting for pod downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6 to disappear Jun 22 21:36:16.355: INFO: Pod downwardapi-volume-5ab0737a-f11d-44db-b229-a4add28a7cb6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:36:16.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1271" for this suite. 
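Both downward-API cases (the plain volume earlier and the projected variant just above) expose limits.cpu through a resourceFieldRef file and, because the container sets no CPU limit, expect the file to contain the node's allocatable CPU, which the kubelet substitutes as the default. A sketch of the plain variant; the projected form nests the same items under projected.sources. The file path and image are illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// No CPU limit is set on the container below, so the
							// kubelet writes node-allocatable CPU into this file.
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}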
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:36:16.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 22 21:36:16.453: INFO: PodSpec: initContainers in spec.initContainers Jun 22 21:37:06.422: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-53170829-f0b3-4027-b6f8-0ef82eb7e2e5", GenerateName:"", Namespace:"init-container-4053", SelfLink:"/api/v1/namespaces/init-container-4053/pods/pod-init-53170829-f0b3-4027-b6f8-0ef82eb7e2e5", UID:"c3324bab-8865-4689-af06-a7093a039870", ResourceVersion:"26483939", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728458576, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"453394803"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nt9q8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a7e100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nt9q8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nt9q8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nt9q8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029863a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002110180), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002986450)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002986470)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002986478), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00298647c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458576, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458576, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458576, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458576, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.187", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.187"}}, StartTime:(*v1.Time)(0xc0016779a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002964150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029641c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3820f29a6e67306ef3a73cf7e714ce99fba8388e7227c063f571055ea2b65ffa", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001677ee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001677e80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0029864ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:37:06.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4053" for this suite. • [SLOW TEST:50.093 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":87,"skipped":1401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:37:06.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 22 21:37:14.641: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 21:37:14.660: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 21:37:16.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 21:37:16.666: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 21:37:18.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 21:37:18.665: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 21:37:20.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 21:37:20.665: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:37:20.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-347" for this suite. • [SLOW TEST:14.220 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1461,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:37:20.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-0cc6c470-e7c5-4177-bbaf-e53013bad370 STEP: Creating a pod to test consume configMaps Jun 22 21:37:20.778: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33" in namespace "configmap-3738" to be "success or failure" Jun 22 21:37:20.781: INFO: Pod "pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061787ms Jun 22 21:37:22.785: INFO: Pod "pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007152095s Jun 22 21:37:24.789: INFO: Pod "pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010921503s STEP: Saw pod success Jun 22 21:37:24.789: INFO: Pod "pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33" satisfied condition "success or failure" Jun 22 21:37:24.792: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33 container configmap-volume-test: STEP: delete the pod Jun 22 21:37:24.826: INFO: Waiting for pod pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33 to disappear Jun 22 21:37:24.835: INFO: Pod pod-configmaps-ef8d1032-c496-4bc3-aad9-b8e41b02dd33 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:37:24.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3738" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1462,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:37:24.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 22 21:37:32.974: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 21:37:33.018: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 21:37:35.018: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 21:37:35.023: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 21:37:37.018: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 21:37:37.023: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 21:37:39.018: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 21:37:39.023: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 21:37:41.018: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 21:37:41.023: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:37:41.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-817" for this suite. 
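The two Lifecycle Hook specs wire postStart and preStop httpGet handlers at a helper pod that records the requests; the "still exists / no longer exists" polling above is the test allowing graceful deletion so the preStop request can land before the pod is gone. A sketch of a pod carrying both handlers, again assuming the v1.17-era v1.Handler type (LifecycleHandler in newer k8s.io/api releases); the helper's host IP and port are illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	// Illustrative address of the helper pod that records hook requests.
	host := "10.244.1.10"
	port := intstr.FromInt(8080)
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-http-hooks"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "600"},
				Lifecycle: &v1.Lifecycle{
					// Fires right after the container starts.
					PostStart: &v1.Handler{
						HTTPGet: &v1.HTTPGetAction{Host: host, Port: port, Path: "/echo?msg=poststart"},
					},
					// Fires during graceful termination, before SIGKILL.
					PreStop: &v1.Handler{
						HTTPGet: &v1.HTTPGetAction{Host: host, Port: port, Path: "/echo?msg=prestop"},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}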
• [SLOW TEST:16.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1468,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:37:41.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:37:41.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:37:43.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458661, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458661, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458661, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458661, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:37:46.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:37:46.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed 
data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:37:47.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1559" for this suite. STEP: Destroying namespace "webhook-1559-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.807 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":91,"skipped":1474,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:37:47.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4926 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4926 Jun 22 21:37:47.946: INFO: Found 0 stateful pods, waiting for 1 Jun 22 21:37:57.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 22 21:37:57.972: INFO: Deleting all statefulset in ns statefulset-4926 Jun 22 21:37:58.060: INFO: Scaling statefulset ss to 0 Jun 22 21:38:18.108: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:38:18.111: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:18.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "statefulset-4926" for this suite. • [SLOW TEST:30.319 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":92,"skipped":1486,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:18.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-540496ec-bc0c-4fa2-a44a-d5995f640013 STEP: Creating a pod to test consume configMaps Jun 22 21:38:18.228: INFO: Waiting up to 5m0s for pod "pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c" in namespace "configmap-261" to be "success or failure" Jun 22 21:38:18.232: INFO: Pod "pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026505ms Jun 22 21:38:20.237: INFO: Pod "pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009124264s Jun 22 21:38:22.242: INFO: Pod "pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013259207s STEP: Saw pod success Jun 22 21:38:22.242: INFO: Pod "pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c" satisfied condition "success or failure" Jun 22 21:38:22.244: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c container configmap-volume-test: STEP: delete the pod Jun 22 21:38:22.264: INFO: Waiting for pod pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c to disappear Jun 22 21:38:22.268: INFO: Pod pod-configmaps-b871d9c2-2281-40a3-a30d-5047c6f5af5c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:22.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-261" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1498,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:22.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:38:22.350: INFO: Creating deployment "test-recreate-deployment" Jun 22 21:38:22.358: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 22 21:38:22.396: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 22 21:38:24.403: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 22 21:38:24.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458702, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458702, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 21:38:26.410: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 22 21:38:26.416: INFO: Updating deployment test-recreate-deployment Jun 22 21:38:26.416: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 22 21:38:26.933: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-194 /apis/apps/v1/namespaces/deployment-194/deployments/test-recreate-deployment f8cd7d31-68d4-42ce-8cb6-112a77278311 26484513 2 2020-06-22 21:38:22 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} 
[] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028b9d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-22 21:38:26 +0000 UTC,LastTransitionTime:2020-06-22 21:38:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-06-22 21:38:26 +0000 UTC,LastTransitionTime:2020-06-22 21:38:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 22 21:38:26.962: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-194 /apis/apps/v1/namespaces/deployment-194/replicasets/test-recreate-deployment-5f94c574ff 9335ca83-2e5b-4eea-b25f-de23c220d87a 26484511 1 2020-06-22 21:38:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f8cd7d31-68d4-42ce-8cb6-112a77278311 0xc0052be0c7 0xc0052be0c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052be128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 21:38:26.962: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 22 21:38:26.962: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-194 /apis/apps/v1/namespaces/deployment-194/replicasets/test-recreate-deployment-799c574856 5aa0cae1-d39e-4ef3-808a-8bbef8b640cc 26484502 2 2020-06-22 21:38:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f8cd7d31-68d4-42ce-8cb6-112a77278311 0xc0052be197 0xc0052be198}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052be208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 21:38:26.966: INFO: Pod "test-recreate-deployment-5f94c574ff-mvwmj" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-mvwmj test-recreate-deployment-5f94c574ff- deployment-194 /api/v1/namespaces/deployment-194/pods/test-recreate-deployment-5f94c574ff-mvwmj c1c02dc2-9b8d-4536-be7c-b56c850b9d0a 26484514 0 2020-06-22 21:38:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 9335ca83-2e5b-4eea-b25f-de23c220d87a 0xc0052be657 0xc0052be658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5rl8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5rl8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5rl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:38:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:38:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:38:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:38:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-22 21:38:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:26.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-194" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":94,"skipped":1504,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:26.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-51d52426-5488-43aa-a6c1-920783ddaf0a STEP: Creating a pod to test consume secrets Jun 22 21:38:27.185: INFO: Waiting up to 5m0s for pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f" in namespace "secrets-266" to be "success or failure" Jun 22 21:38:27.195: INFO: Pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126476ms Jun 22 21:38:29.207: INFO: Pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022009876s Jun 22 21:38:31.212: INFO: Pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f": Phase="Running", Reason="", readiness=true. Elapsed: 4.02733866s Jun 22 21:38:33.217: INFO: Pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031507558s STEP: Saw pod success Jun 22 21:38:33.217: INFO: Pod "pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f" satisfied condition "success or failure" Jun 22 21:38:33.219: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f container secret-volume-test: STEP: delete the pod Jun 22 21:38:33.234: INFO: Waiting for pod pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f to disappear Jun 22 21:38:33.239: INFO: Pod pod-secrets-f09ed8cd-a6f2-436e-b81f-add4d0b4ee2f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-266" for this suite. 
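------------------------------
The Secrets variant just above is the same projection pattern with a Secret source: items remaps a key to a new filename and mode sets the per-file permission the container then reads back. A sketch with illustrative names and values:

apiVersion: v1
kind: Secret
metadata:
  name: example-secret                 # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name matching the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      items:
      - key: data-1
        path: new-path-data-1          # key remapped to a new filename
        mode: 0400                     # per-file mode the spec verifies
------------------------------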
• [SLOW TEST:6.270 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:33.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:37.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-658" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:37.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-1600ad2e-086c-4823-9f9e-99dbd87f32a8 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-603" for this suite. 
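------------------------------
The binary-data spec mounts a ConfigMap that carries both data and binaryData; binary keys are base64 in the manifest and land in the volume as raw bytes. Roughly, with an illustrative name and payload:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example       # illustrative name
data:
  text-data: "some text"               # plain UTF-8, served as-is
binaryData:
  binary-data: 3q2+7w==                # base64 for the bytes 0xde 0xad 0xbe 0xef

Mounted through an ordinary configMap volume, text-data arrives as text and binary-data as the decoded bytes, which is what the "Waiting for pod with text data" and "Waiting for pod with binary data" steps above check in turn.
------------------------------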
• [SLOW TEST:6.157 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:43.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:38:44.295: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:38:46.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458724, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458724, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458724, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458724, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:38:49.343: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:49.679: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6304" for this suite. STEP: Destroying namespace "webhook-6304-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.323 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":98,"skipped":1613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:49.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:38:49.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6" in namespace "projected-2787" to be "success or failure" Jun 22 21:38:49.946: INFO: Pod "downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086397ms Jun 22 21:38:51.951: INFO: Pod "downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00853643s Jun 22 21:38:53.982: INFO: Pod "downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040310725s STEP: Saw pod success Jun 22 21:38:53.982: INFO: Pod "downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6" satisfied condition "success or failure" Jun 22 21:38:53.986: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6 container client-container: STEP: delete the pod Jun 22 21:38:54.018: INFO: Waiting for pod downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6 to disappear Jun 22 21:38:54.022: INFO: Pod downwardapi-volume-8f84c2a1-2574-4a64-a9c6-7a6a17358db6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:38:54.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2787" for this suite. 
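------------------------------
The Projected downwardAPI spec asserts that defaultMode on a projected volume is applied to the files it materializes from pod metadata. A sketch, with illustrative names and 0400 standing in for whatever mode the suite chose:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # container name matching the log above
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # the mode the spec asserts on every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name # file content filled in from pod metadata
------------------------------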
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1649,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:38:54.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8405 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8405 STEP: creating replication controller externalsvc in namespace services-8405 I0622 21:38:54.319411 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8405, replica count: 2 I0622 21:38:57.370050 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:39:00.370319 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 22 21:39:00.420: INFO: Creating new exec pod Jun 22 21:39:04.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8405 execpodrhqf4 -- /bin/sh -x -c nslookup nodeport-service' Jun 22 21:39:04.896: INFO: stderr: "I0622 21:39:04.596158 1197 log.go:172] (0xc000a7b3f0) (0xc000bb65a0) Create stream\nI0622 21:39:04.596228 1197 log.go:172] (0xc000a7b3f0) (0xc000bb65a0) Stream added, broadcasting: 1\nI0622 21:39:04.601620 1197 log.go:172] (0xc000a7b3f0) Reply frame received for 1\nI0622 21:39:04.601680 1197 log.go:172] (0xc000a7b3f0) (0xc000739b80) Create stream\nI0622 21:39:04.601703 1197 log.go:172] (0xc000a7b3f0) (0xc000739b80) Stream added, broadcasting: 3\nI0622 21:39:04.602918 1197 log.go:172] (0xc000a7b3f0) Reply frame received for 3\nI0622 21:39:04.602952 1197 log.go:172] (0xc000a7b3f0) (0xc000739c20) Create stream\nI0622 21:39:04.602967 1197 log.go:172] (0xc000a7b3f0) (0xc000739c20) Stream added, broadcasting: 5\nI0622 21:39:04.604027 1197 log.go:172] (0xc000a7b3f0) Reply frame received for 5\nI0622 21:39:04.724727 1197 log.go:172] (0xc000a7b3f0) Data frame received for 5\nI0622 21:39:04.724758 1197 log.go:172] (0xc000739c20) (5) Data frame handling\nI0622 21:39:04.724775 1197 log.go:172] (0xc000739c20) (5) Data frame sent\n+ nslookup nodeport-service\nI0622 21:39:04.888384 1197 log.go:172] (0xc000a7b3f0) Data frame received for 3\nI0622 21:39:04.888420 1197 log.go:172] (0xc000739b80) (3) Data frame handling\nI0622 21:39:04.888448 1197 log.go:172] (0xc000739b80) (3) Data frame sent\nI0622 
21:39:04.889051 1197 log.go:172] (0xc000a7b3f0) Data frame received for 3\nI0622 21:39:04.889069 1197 log.go:172] (0xc000739b80) (3) Data frame handling\nI0622 21:39:04.889089 1197 log.go:172] (0xc000739b80) (3) Data frame sent\nI0622 21:39:04.890008 1197 log.go:172] (0xc000a7b3f0) Data frame received for 5\nI0622 21:39:04.890037 1197 log.go:172] (0xc000739c20) (5) Data frame handling\nI0622 21:39:04.890095 1197 log.go:172] (0xc000a7b3f0) Data frame received for 3\nI0622 21:39:04.890152 1197 log.go:172] (0xc000739b80) (3) Data frame handling\nI0622 21:39:04.891776 1197 log.go:172] (0xc000a7b3f0) Data frame received for 1\nI0622 21:39:04.891902 1197 log.go:172] (0xc000bb65a0) (1) Data frame handling\nI0622 21:39:04.892024 1197 log.go:172] (0xc000bb65a0) (1) Data frame sent\nI0622 21:39:04.892121 1197 log.go:172] (0xc000a7b3f0) (0xc000bb65a0) Stream removed, broadcasting: 1\nI0622 21:39:04.892161 1197 log.go:172] (0xc000a7b3f0) Go away received\nI0622 21:39:04.892540 1197 log.go:172] (0xc000a7b3f0) (0xc000bb65a0) Stream removed, broadcasting: 1\nI0622 21:39:04.892559 1197 log.go:172] (0xc000a7b3f0) (0xc000739b80) Stream removed, broadcasting: 3\nI0622 21:39:04.892570 1197 log.go:172] (0xc000a7b3f0) (0xc000739c20) Stream removed, broadcasting: 5\n" Jun 22 21:39:04.896: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8405.svc.cluster.local\tcanonical name = externalsvc.services-8405.svc.cluster.local.\nName:\texternalsvc.services-8405.svc.cluster.local\nAddress: 10.99.100.151\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8405, will wait for the garbage collector to delete the pods Jun 22 21:39:04.968: INFO: Deleting ReplicationController externalsvc took: 6.661873ms Jun 22 21:39:05.368: INFO: Terminating ReplicationController externalsvc pods took: 400.227443ms Jun 22 21:39:09.793: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8405" for this suite. 
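------------------------------
After the type flip, nodeport-service is nothing but a DNS alias for the backing service, which is exactly what the nslookup output above shows ("canonical name = externalsvc.services-8405.svc.cluster.local"). The end state of the Service looks roughly like:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-8405
spec:
  type: ExternalName
  # the CNAME target; matches the canonical-name line in the nslookup output above
  externalName: externalsvc.services-8405.svc.cluster.local
------------------------------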
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.846 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":100,"skipped":1663,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:09.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:15.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-78" for this suite. • [SLOW TEST:5.163 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":101,"skipped":1663,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:15.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:31.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6728" for this suite. 
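------------------------------
"Locally restarted" in the Job spec above means restartPolicy: OnFailure: the kubelet restarts the failing container inside the same pod rather than the Job controller replacing the pod. A sketch with illustrative counts and a stand-in workload:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-local-restart-job      # illustrative name
spec:
  completions: 2                       # illustrative counts; the suite picks its own
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure         # failed containers restart in place, in the same pod
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        # stand-in workload; the suite's container fails a few times before succeeding
        command: ["sh", "-c", "exit 0"]

The "Ensuring job reaches completions" step then just waits for status.succeeded to hit the completions count despite the intermediate failures.
------------------------------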
• [SLOW TEST:16.109 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":102,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:31.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-59145ed4-b3ca-4cf6-b962-859be92de39e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:31.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5729" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":103,"skipped":1706,"failed":0} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:31.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 22 21:39:31.285: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 22 21:39:40.342: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:40.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7355" for this suite. 
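------------------------------
The Pods spec above is a watch test: it submits a pod, observes the creation event, deletes it gracefully, and waits for the DELETED event. A pod of roughly this shape is all it needs; the name and label are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove              # illustrative name
  labels:
    time: v1                           # something to select on while watching
spec:
  terminationGracePeriodSeconds: 30    # window the kubelet grants before force-killing
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image family used elsewhere in this run
    args: ["pause"]

Something like kubectl get pods -l time=v1 -w surfaces the same ADDED/MODIFIED/DELETED sequence the spec asserts on.
------------------------------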
• [SLOW TEST:9.136 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1707,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:40.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 22 21:39:40.475: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:40.504: INFO: Number of nodes with available pods: 0 Jun 22 21:39:40.504: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:41.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:41.513: INFO: Number of nodes with available pods: 0 Jun 22 21:39:41.513: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:42.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:42.544: INFO: Number of nodes with available pods: 0 Jun 22 21:39:42.544: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:43.559: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:43.575: INFO: Number of nodes with available pods: 0 Jun 22 21:39:43.575: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:44.510: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:44.514: INFO: Number of nodes with available pods: 1 Jun 22 21:39:44.514: INFO: Node jerma-worker2 is running more than one daemon pod Jun 22 21:39:45.508: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:45.511: INFO: Number of nodes with available pods: 2 Jun 22 21:39:45.511: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon 
pod's phase to 'Failed', check that the daemon pod is revived. Jun 22 21:39:45.550: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:45.595: INFO: Number of nodes with available pods: 1 Jun 22 21:39:45.595: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:46.599: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:46.601: INFO: Number of nodes with available pods: 1 Jun 22 21:39:46.602: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:47.598: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:47.601: INFO: Number of nodes with available pods: 1 Jun 22 21:39:47.601: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:48.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:48.614: INFO: Number of nodes with available pods: 1 Jun 22 21:39:48.614: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:39:49.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:39:49.605: INFO: Number of nodes with available pods: 2 Jun 22 21:39:49.605: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4145, will wait for the garbage collector to delete the pods Jun 22 21:39:49.668: INFO: Deleting DaemonSet.extensions daemon-set took: 6.182894ms Jun 22 21:39:49.768: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.250649ms Jun 22 21:39:59.276: INFO: Number of nodes with available pods: 0 Jun 22 21:39:59.276: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 21:39:59.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4145/daemonsets","resourceVersion":"26485320"},"items":null} Jun 22 21:39:59.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4145/pods","resourceVersion":"26485320"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:39:59.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4145" for this suite. 
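------------------------------
The DaemonSet in this spec is deliberately minimal, which is also why every poll above skips jerma-control-plane: the pod template declares no toleration for the master taint. Roughly, with an illustrative label and an image this run uses elsewhere:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                     # name used by the spec above
spec:
  selector:
    matchLabels:
      app: daemon-set                  # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # no toleration for the node-role.kubernetes.io/master taint is declared,
      # which is why the controller leaves the control-plane node alone
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

The "revived" half of the spec then forces one daemon pod's phase to Failed and waits for the controller to replace it, which is the second polling loop above.
------------------------------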
• [SLOW TEST:18.944 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":105,"skipped":1710,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:39:59.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 22 21:40:00.091: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 22 21:40:02.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:40:05.134: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:40:05.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:06.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1265" for this 
suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.132 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":106,"skipped":1713,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:06.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5059 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 21:40:06.482: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 21:40:26.636: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.204:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5059 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 21:40:26.636: INFO: >>> kubeConfig: /root/.kube/config I0622 21:40:26.671069 6 log.go:172] (0xc0026c00b0) (0xc00222b720) Create stream I0622 21:40:26.671112 6 log.go:172] (0xc0026c00b0) (0xc00222b720) Stream added, broadcasting: 1 I0622 21:40:26.673981 6 log.go:172] (0xc0026c00b0) Reply frame received for 1 I0622 21:40:26.674022 6 log.go:172] (0xc0026c00b0) (0xc0022980a0) Create stream I0622 21:40:26.674037 6 log.go:172] (0xc0026c00b0) (0xc0022980a0) Stream added, broadcasting: 3 I0622 21:40:26.675306 6 log.go:172] (0xc0026c00b0) Reply frame received for 3 I0622 21:40:26.675361 6 log.go:172] (0xc0026c00b0) (0xc00222b7c0) Create stream I0622 21:40:26.675382 6 log.go:172] (0xc0026c00b0) (0xc00222b7c0) Stream added, broadcasting: 5 I0622 21:40:26.676346 6 log.go:172] (0xc0026c00b0) Reply frame received for 5 I0622 21:40:26.840095 6 log.go:172] (0xc0026c00b0) Data frame received for 3 I0622 21:40:26.840117 6 log.go:172] (0xc0022980a0) (3) Data frame handling I0622 21:40:26.840131 6 log.go:172] (0xc0022980a0) (3) Data frame sent I0622 21:40:26.840138 6 log.go:172] (0xc0026c00b0) Data frame received for 3 I0622 21:40:26.840144 6 log.go:172] (0xc0022980a0) (3) Data frame handling I0622 21:40:26.840487 6 log.go:172] (0xc0026c00b0) Data frame received for 5 I0622 21:40:26.840528 6 
log.go:172] (0xc00222b7c0) (5) Data frame handling I0622 21:40:26.843121 6 log.go:172] (0xc0026c00b0) Data frame received for 1 I0622 21:40:26.843137 6 log.go:172] (0xc00222b720) (1) Data frame handling I0622 21:40:26.843153 6 log.go:172] (0xc00222b720) (1) Data frame sent I0622 21:40:26.843169 6 log.go:172] (0xc0026c00b0) (0xc00222b720) Stream removed, broadcasting: 1 I0622 21:40:26.843247 6 log.go:172] (0xc0026c00b0) (0xc00222b720) Stream removed, broadcasting: 1 I0622 21:40:26.843285 6 log.go:172] (0xc0026c00b0) (0xc0022980a0) Stream removed, broadcasting: 3 I0622 21:40:26.843305 6 log.go:172] (0xc0026c00b0) (0xc00222b7c0) Stream removed, broadcasting: 5 Jun 22 21:40:26.843: INFO: Found all expected endpoints: [netserver-0] I0622 21:40:26.843411 6 log.go:172] (0xc0026c00b0) Go away received Jun 22 21:40:26.846: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.35:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5059 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 21:40:26.846: INFO: >>> kubeConfig: /root/.kube/config I0622 21:40:26.876230 6 log.go:172] (0xc0027e0d10) (0xc002298780) Create stream I0622 21:40:26.876255 6 log.go:172] (0xc0027e0d10) (0xc002298780) Stream added, broadcasting: 1 I0622 21:40:26.878537 6 log.go:172] (0xc0027e0d10) Reply frame received for 1 I0622 21:40:26.878580 6 log.go:172] (0xc0027e0d10) (0xc00222ba40) Create stream I0622 21:40:26.878592 6 log.go:172] (0xc0027e0d10) (0xc00222ba40) Stream added, broadcasting: 3 I0622 21:40:26.880261 6 log.go:172] (0xc0027e0d10) Reply frame received for 3 I0622 21:40:26.880313 6 log.go:172] (0xc0027e0d10) (0xc001e041e0) Create stream I0622 21:40:26.880328 6 log.go:172] (0xc0027e0d10) (0xc001e041e0) Stream added, broadcasting: 5 I0622 21:40:26.881636 6 log.go:172] (0xc0027e0d10) Reply frame received for 5 I0622 21:40:26.952646 6 log.go:172] (0xc0027e0d10) Data frame received for 3 I0622 21:40:26.952722 6 log.go:172] (0xc00222ba40) (3) Data frame handling I0622 21:40:26.952753 6 log.go:172] (0xc00222ba40) (3) Data frame sent I0622 21:40:26.952791 6 log.go:172] (0xc0027e0d10) Data frame received for 3 I0622 21:40:26.952805 6 log.go:172] (0xc00222ba40) (3) Data frame handling I0622 21:40:26.952935 6 log.go:172] (0xc0027e0d10) Data frame received for 5 I0622 21:40:26.952966 6 log.go:172] (0xc001e041e0) (5) Data frame handling I0622 21:40:26.954467 6 log.go:172] (0xc0027e0d10) Data frame received for 1 I0622 21:40:26.954484 6 log.go:172] (0xc002298780) (1) Data frame handling I0622 21:40:26.954494 6 log.go:172] (0xc002298780) (1) Data frame sent I0622 21:40:26.954504 6 log.go:172] (0xc0027e0d10) (0xc002298780) Stream removed, broadcasting: 1 I0622 21:40:26.954527 6 log.go:172] (0xc0027e0d10) Go away received I0622 21:40:26.954623 6 log.go:172] (0xc0027e0d10) (0xc002298780) Stream removed, broadcasting: 1 I0622 21:40:26.954651 6 log.go:172] (0xc0027e0d10) (0xc00222ba40) Stream removed, broadcasting: 3 I0622 21:40:26.954667 6 log.go:172] (0xc0027e0d10) (0xc001e041e0) Stream removed, broadcasting: 5 Jun 22 21:40:26.954: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:26.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5059" for this suite. 
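The connectivity probe logged above is just curl executed inside the host-network test pod against each netserver pod IP; the same check can be issued by hand with kubectl exec (the namespace, pod, container, and IPs below are the ones from this run and will differ elsewhere):
  kubectl -n pod-network-test-5059 exec host-test-container-pod -c agnhost -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.204:8080/hostName"
  # a non-empty hostname in the output is exactly what "Found all expected endpoints" asserts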
• [SLOW TEST:20.532 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1720,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:26.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:40:27.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3391' Jun 22 21:40:27.160: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 21:40:27.160: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jun 22 21:40:27.266: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-w4pnb] Jun 22 21:40:27.266: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-w4pnb" in namespace "kubectl-3391" to be "running and ready" Jun 22 21:40:27.307: INFO: Pod "e2e-test-httpd-rc-w4pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 41.104171ms Jun 22 21:40:29.311: INFO: Pod "e2e-test-httpd-rc-w4pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045101051s Jun 22 21:40:31.316: INFO: Pod "e2e-test-httpd-rc-w4pnb": Phase="Running", Reason="", readiness=true. Elapsed: 4.049739465s Jun 22 21:40:31.316: INFO: Pod "e2e-test-httpd-rc-w4pnb" satisfied condition "running and ready" Jun 22 21:40:31.316: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-w4pnb] Jun 22 21:40:31.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3391' Jun 22 21:40:35.699: INFO: stderr: "" Jun 22 21:40:35.699: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.206. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.206. Set the 'ServerName' directive globally to suppress this message\n[Mon Jun 22 21:40:30.012846 2020] [mpm_event:notice] [pid 1:tid 139852370246504] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Jun 22 21:40:30.012887 2020] [core:notice] [pid 1:tid 139852370246504] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Jun 22 21:40:35.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3391' Jun 22 21:40:35.811: INFO: stderr: "" Jun 22 21:40:35.811: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:35.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3391" for this suite. • [SLOW TEST:8.853 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":108,"skipped":1732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:35.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c6a752f4-726e-4b6d-b70b-31caeffaf57f STEP: Creating a pod to test consume configMaps Jun 22 21:40:35.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278" in namespace "configmap-3960" to be "success or failure" Jun 22 21:40:35.924: INFO: Pod "pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278": Phase="Pending", Reason="", readiness=false. Elapsed: 15.8379ms Jun 22 21:40:37.929: INFO: Pod "pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02036681s Jun 22 21:40:39.944: INFO: Pod "pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035526893s STEP: Saw pod success Jun 22 21:40:39.944: INFO: Pod "pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278" satisfied condition "success or failure" Jun 22 21:40:39.947: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278 container configmap-volume-test: STEP: delete the pod Jun 22 21:40:39.971: INFO: Waiting for pod pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278 to disappear Jun 22 21:40:39.974: INFO: Pod pod-configmaps-1fecaf25-460c-4c8a-b239-e735df39d278 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:39.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3960" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1764,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:39.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:56.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-951" for this suite. • [SLOW TEST:16.226 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":110,"skipped":1765,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:56.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:40:56.288: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:40:57.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9896" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":111,"skipped":1781,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:40:57.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:40:57.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:40:59.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 21:41:01.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728458857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:41:04.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:41:17.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9681" for this suite. STEP: Destroying namespace "webhook-9681-markers" for this suite. 
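The timeout behaviour exercised in this test hinges on two per-webhook fields: timeoutSeconds bounds how long the apiserver waits for the webhook, and failurePolicy decides whether a timeout rejects the request (Fail) or lets it through (Ignore). A minimal registration sketch, assuming a slow admission service like the one deployed above; the configuration name, webhook name, path, and rule are illustrative, and the caBundle a real TLS-serving setup needs is elided:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example          # hypothetical name
webhooks:
- name: slow.example.com              # hypothetical
  timeoutSeconds: 1                   # shorter than a 5s-slow backend, as in the first step above
  failurePolicy: Ignore               # timeout tolerated; Fail would reject the request instead
  clientConfig:
    service:
      namespace: webhook-9681         # mirrors this run's namespace
      name: e2e-test-webhook          # the service the log waits on above
      path: /always-allow-delay-5s    # hypothetical slow endpoint on the test server
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF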
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.823 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":112,"skipped":1790,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:41:17.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:41:17.222: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:41:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3047" for this suite. 
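Under the hood this test streams from the pod's log subresource over a websocket. The same endpoint can be exercised by hand through kubectl proxy (plain HTTP streaming rather than the websocket upgrade, but the identical URL; the pod name is a placeholder):
  kubectl proxy --port=8001 &
  curl "http://127.0.0.1:8001/api/v1/namespaces/pods-3047/pods/<pod-name>/log?follow=true"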
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:41:21.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:41:21.340: INFO: Creating ReplicaSet my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8 Jun 22 21:41:21.380: INFO: Pod name my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8: Found 0 pods out of 1 Jun 22 21:41:26.416: INFO: Pod name my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8: Found 1 pods out of 1 Jun 22 21:41:26.416: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8" is running Jun 22 21:41:26.420: INFO: Pod "my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8-6gmfw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:41:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:41:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:41:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:41:21 +0000 UTC Reason: Message:}]) Jun 22 21:41:26.420: INFO: Trying to dial the pod Jun 22 21:41:31.432: INFO: Controller my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8: Got expected result from replica 1 [my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8-6gmfw]: "my-hostname-basic-0a36b06b-6481-4961-ad52-5f0ddaf96fd8-6gmfw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:41:31.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3377" for this suite. 
• [SLOW TEST:10.156 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":114,"skipped":1859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:41:31.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1622 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1622 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1622 Jun 22 21:41:31.594: INFO: Found 0 stateful pods, waiting for 1 Jun 22 21:41:41.615: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 22 21:41:41.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:41:41.930: INFO: stderr: "I0622 21:41:41.793474 1294 log.go:172] (0xc000228dc0) (0xc0007279a0) Create stream\nI0622 21:41:41.793540 1294 log.go:172] (0xc000228dc0) (0xc0007279a0) Stream added, broadcasting: 1\nI0622 21:41:41.796318 1294 log.go:172] (0xc000228dc0) Reply frame received for 1\nI0622 21:41:41.796381 1294 log.go:172] (0xc000228dc0) (0xc0009e0000) Create stream\nI0622 21:41:41.796398 1294 log.go:172] (0xc000228dc0) (0xc0009e0000) Stream added, broadcasting: 3\nI0622 21:41:41.797760 1294 log.go:172] (0xc000228dc0) Reply frame received for 3\nI0622 21:41:41.797805 1294 log.go:172] (0xc000228dc0) (0xc000727b80) Create stream\nI0622 21:41:41.797837 1294 log.go:172] (0xc000228dc0) (0xc000727b80) Stream added, broadcasting: 5\nI0622 21:41:41.798734 1294 log.go:172] (0xc000228dc0) Reply frame received for 5\nI0622 21:41:41.869689 1294 log.go:172] (0xc000228dc0) Data frame received for 5\nI0622 21:41:41.869717 1294 log.go:172] (0xc000727b80) (5) Data frame handling\nI0622 21:41:41.869735 1294 log.go:172] (0xc000727b80) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:41:41.920699 1294 log.go:172] (0xc000228dc0) Data frame received for 3\nI0622 21:41:41.920745 1294 log.go:172] (0xc0009e0000) (3) Data frame handling\nI0622 21:41:41.920777 1294 log.go:172] (0xc0009e0000) (3) Data frame sent\nI0622 21:41:41.921013 1294 log.go:172] (0xc000228dc0) Data frame received for 3\nI0622 21:41:41.921040 1294 log.go:172] (0xc0009e0000) (3) Data frame handling\nI0622 21:41:41.921067 1294 log.go:172] (0xc000228dc0) Data frame received for 5\nI0622 21:41:41.921084 1294 log.go:172] (0xc000727b80) (5) Data frame handling\nI0622 21:41:41.923361 1294 log.go:172] (0xc000228dc0) Data frame received for 1\nI0622 21:41:41.923393 1294 log.go:172] (0xc0007279a0) (1) Data frame handling\nI0622 21:41:41.923425 1294 log.go:172] (0xc0007279a0) (1) Data frame sent\nI0622 21:41:41.923453 1294 log.go:172] (0xc000228dc0) (0xc0007279a0) Stream removed, broadcasting: 1\nI0622 21:41:41.923652 1294 log.go:172] (0xc000228dc0) Go away received\nI0622 21:41:41.923925 1294 log.go:172] (0xc000228dc0) (0xc0007279a0) Stream removed, broadcasting: 1\nI0622 21:41:41.923946 1294 log.go:172] (0xc000228dc0) (0xc0009e0000) Stream removed, broadcasting: 3\nI0622 21:41:41.923978 1294 log.go:172] (0xc000228dc0) (0xc000727b80) Stream removed, broadcasting: 5\n" Jun 22 21:41:41.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:41:41.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 21:41:41.939: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 22 21:41:51.943: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 21:41:51.943: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:41:51.956: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999624s Jun 22 21:41:52.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995970711s Jun 22 21:41:53.964: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.99191619s Jun 22 21:41:54.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987901955s Jun 22 21:41:55.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984198139s Jun 22 21:41:56.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.980554604s Jun 22 21:41:57.991: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.976153027s Jun 22 21:41:58.996: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961163078s Jun 22 21:42:00.000: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.956055758s Jun 22 21:42:01.005: INFO: Verifying statefulset ss doesn't scale past 1 for another 951.92904ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1622 Jun 22 21:42:02.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:42:02.277: INFO: stderr: "I0622 21:42:02.168725 1316 log.go:172] (0xc000ad4630) (0xc000b4c000) Create stream\nI0622 21:42:02.168786 1316 log.go:172] (0xc000ad4630) (0xc000b4c000) Stream added, broadcasting: 1\nI0622 21:42:02.171403 1316 log.go:172] (0xc000ad4630) Reply frame received for 1\nI0622 21:42:02.171435 1316 log.go:172] (0xc000ad4630) 
(0xc0005cfb80) Create stream\nI0622 21:42:02.171442 1316 log.go:172] (0xc000ad4630) (0xc0005cfb80) Stream added, broadcasting: 3\nI0622 21:42:02.172211 1316 log.go:172] (0xc000ad4630) Reply frame received for 3\nI0622 21:42:02.172242 1316 log.go:172] (0xc000ad4630) (0xc000018000) Create stream\nI0622 21:42:02.172251 1316 log.go:172] (0xc000ad4630) (0xc000018000) Stream added, broadcasting: 5\nI0622 21:42:02.172984 1316 log.go:172] (0xc000ad4630) Reply frame received for 5\nI0622 21:42:02.268031 1316 log.go:172] (0xc000ad4630) Data frame received for 5\nI0622 21:42:02.268056 1316 log.go:172] (0xc000018000) (5) Data frame handling\nI0622 21:42:02.268096 1316 log.go:172] (0xc000018000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 21:42:02.268325 1316 log.go:172] (0xc000ad4630) Data frame received for 3\nI0622 21:42:02.268368 1316 log.go:172] (0xc0005cfb80) (3) Data frame handling\nI0622 21:42:02.268398 1316 log.go:172] (0xc0005cfb80) (3) Data frame sent\nI0622 21:42:02.268760 1316 log.go:172] (0xc000ad4630) Data frame received for 3\nI0622 21:42:02.268796 1316 log.go:172] (0xc0005cfb80) (3) Data frame handling\nI0622 21:42:02.268821 1316 log.go:172] (0xc000ad4630) Data frame received for 5\nI0622 21:42:02.268829 1316 log.go:172] (0xc000018000) (5) Data frame handling\nI0622 21:42:02.270473 1316 log.go:172] (0xc000ad4630) Data frame received for 1\nI0622 21:42:02.270503 1316 log.go:172] (0xc000b4c000) (1) Data frame handling\nI0622 21:42:02.270514 1316 log.go:172] (0xc000b4c000) (1) Data frame sent\nI0622 21:42:02.270530 1316 log.go:172] (0xc000ad4630) (0xc000b4c000) Stream removed, broadcasting: 1\nI0622 21:42:02.270545 1316 log.go:172] (0xc000ad4630) Go away received\nI0622 21:42:02.270851 1316 log.go:172] (0xc000ad4630) (0xc000b4c000) Stream removed, broadcasting: 1\nI0622 21:42:02.270864 1316 log.go:172] (0xc000ad4630) (0xc0005cfb80) Stream removed, broadcasting: 3\nI0622 21:42:02.270870 1316 log.go:172] (0xc000ad4630) (0xc000018000) Stream removed, broadcasting: 5\n" Jun 22 21:42:02.277: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:42:02.277: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:42:02.280: INFO: Found 1 stateful pods, waiting for 3 Jun 22 21:42:12.284: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:42:12.284: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 21:42:12.284: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 22 21:42:12.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:42:12.508: INFO: stderr: "I0622 21:42:12.428599 1337 log.go:172] (0xc0008ab290) (0xc0008a25a0) Create stream\nI0622 21:42:12.428644 1337 log.go:172] (0xc0008ab290) (0xc0008a25a0) Stream added, broadcasting: 1\nI0622 21:42:12.430698 1337 log.go:172] (0xc0008ab290) Reply frame received for 1\nI0622 21:42:12.430725 1337 log.go:172] (0xc0008ab290) (0xc000a3a5a0) Create stream\nI0622 21:42:12.430733 1337 log.go:172] (0xc0008ab290) (0xc000a3a5a0) Stream added, broadcasting: 3\nI0622 21:42:12.431318 1337 log.go:172] (0xc0008ab290) 
Reply frame received for 3\nI0622 21:42:12.431338 1337 log.go:172] (0xc0008ab290) (0xc0008a2640) Create stream\nI0622 21:42:12.431344 1337 log.go:172] (0xc0008ab290) (0xc0008a2640) Stream added, broadcasting: 5\nI0622 21:42:12.431876 1337 log.go:172] (0xc0008ab290) Reply frame received for 5\nI0622 21:42:12.502148 1337 log.go:172] (0xc0008ab290) Data frame received for 5\nI0622 21:42:12.502184 1337 log.go:172] (0xc0008a2640) (5) Data frame handling\nI0622 21:42:12.502194 1337 log.go:172] (0xc0008a2640) (5) Data frame sent\nI0622 21:42:12.502203 1337 log.go:172] (0xc0008ab290) Data frame received for 5\nI0622 21:42:12.502212 1337 log.go:172] (0xc0008a2640) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:42:12.502229 1337 log.go:172] (0xc0008ab290) Data frame received for 3\nI0622 21:42:12.502238 1337 log.go:172] (0xc000a3a5a0) (3) Data frame handling\nI0622 21:42:12.502251 1337 log.go:172] (0xc000a3a5a0) (3) Data frame sent\nI0622 21:42:12.502256 1337 log.go:172] (0xc0008ab290) Data frame received for 3\nI0622 21:42:12.502262 1337 log.go:172] (0xc000a3a5a0) (3) Data frame handling\nI0622 21:42:12.503793 1337 log.go:172] (0xc0008ab290) Data frame received for 1\nI0622 21:42:12.503812 1337 log.go:172] (0xc0008a25a0) (1) Data frame handling\nI0622 21:42:12.503825 1337 log.go:172] (0xc0008a25a0) (1) Data frame sent\nI0622 21:42:12.503847 1337 log.go:172] (0xc0008ab290) (0xc0008a25a0) Stream removed, broadcasting: 1\nI0622 21:42:12.503908 1337 log.go:172] (0xc0008ab290) Go away received\nI0622 21:42:12.504147 1337 log.go:172] (0xc0008ab290) (0xc0008a25a0) Stream removed, broadcasting: 1\nI0622 21:42:12.504161 1337 log.go:172] (0xc0008ab290) (0xc000a3a5a0) Stream removed, broadcasting: 3\nI0622 21:42:12.504168 1337 log.go:172] (0xc0008ab290) (0xc0008a2640) Stream removed, broadcasting: 5\n" Jun 22 21:42:12.509: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:42:12.509: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 21:42:12.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:42:12.817: INFO: stderr: "I0622 21:42:12.703773 1354 log.go:172] (0xc0000f13f0) (0xc000702000) Create stream\nI0622 21:42:12.703850 1354 log.go:172] (0xc0000f13f0) (0xc000702000) Stream added, broadcasting: 1\nI0622 21:42:12.707118 1354 log.go:172] (0xc0000f13f0) Reply frame received for 1\nI0622 21:42:12.707157 1354 log.go:172] (0xc0000f13f0) (0xc000702140) Create stream\nI0622 21:42:12.707166 1354 log.go:172] (0xc0000f13f0) (0xc000702140) Stream added, broadcasting: 3\nI0622 21:42:12.708227 1354 log.go:172] (0xc0000f13f0) Reply frame received for 3\nI0622 21:42:12.708268 1354 log.go:172] (0xc0000f13f0) (0xc0007021e0) Create stream\nI0622 21:42:12.708282 1354 log.go:172] (0xc0000f13f0) (0xc0007021e0) Stream added, broadcasting: 5\nI0622 21:42:12.709520 1354 log.go:172] (0xc0000f13f0) Reply frame received for 5\nI0622 21:42:12.766822 1354 log.go:172] (0xc0000f13f0) Data frame received for 5\nI0622 21:42:12.766856 1354 log.go:172] (0xc0007021e0) (5) Data frame handling\nI0622 21:42:12.766882 1354 log.go:172] (0xc0007021e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:42:12.807679 1354 log.go:172] (0xc0000f13f0) Data frame received for 3\nI0622 21:42:12.807711 1354 
log.go:172] (0xc000702140) (3) Data frame handling\nI0622 21:42:12.807725 1354 log.go:172] (0xc000702140) (3) Data frame sent\nI0622 21:42:12.808002 1354 log.go:172] (0xc0000f13f0) Data frame received for 3\nI0622 21:42:12.808035 1354 log.go:172] (0xc000702140) (3) Data frame handling\nI0622 21:42:12.808262 1354 log.go:172] (0xc0000f13f0) Data frame received for 5\nI0622 21:42:12.808283 1354 log.go:172] (0xc0007021e0) (5) Data frame handling\nI0622 21:42:12.810435 1354 log.go:172] (0xc0000f13f0) Data frame received for 1\nI0622 21:42:12.810455 1354 log.go:172] (0xc000702000) (1) Data frame handling\nI0622 21:42:12.810467 1354 log.go:172] (0xc000702000) (1) Data frame sent\nI0622 21:42:12.810690 1354 log.go:172] (0xc0000f13f0) (0xc000702000) Stream removed, broadcasting: 1\nI0622 21:42:12.810743 1354 log.go:172] (0xc0000f13f0) Go away received\nI0622 21:42:12.811146 1354 log.go:172] (0xc0000f13f0) (0xc000702000) Stream removed, broadcasting: 1\nI0622 21:42:12.811164 1354 log.go:172] (0xc0000f13f0) (0xc000702140) Stream removed, broadcasting: 3\nI0622 21:42:12.811174 1354 log.go:172] (0xc0000f13f0) (0xc0007021e0) Stream removed, broadcasting: 5\n" Jun 22 21:42:12.818: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:42:12.818: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 21:42:12.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 21:42:13.069: INFO: stderr: "I0622 21:42:12.955992 1377 log.go:172] (0xc000115290) (0xc000908000) Create stream\nI0622 21:42:12.956045 1377 log.go:172] (0xc000115290) (0xc000908000) Stream added, broadcasting: 1\nI0622 21:42:12.958422 1377 log.go:172] (0xc000115290) Reply frame received for 1\nI0622 21:42:12.958460 1377 log.go:172] (0xc000115290) (0xc000685c20) Create stream\nI0622 21:42:12.958471 1377 log.go:172] (0xc000115290) (0xc000685c20) Stream added, broadcasting: 3\nI0622 21:42:12.959728 1377 log.go:172] (0xc000115290) Reply frame received for 3\nI0622 21:42:12.959798 1377 log.go:172] (0xc000115290) (0xc000308000) Create stream\nI0622 21:42:12.959823 1377 log.go:172] (0xc000115290) (0xc000308000) Stream added, broadcasting: 5\nI0622 21:42:12.960540 1377 log.go:172] (0xc000115290) Reply frame received for 5\nI0622 21:42:13.031046 1377 log.go:172] (0xc000115290) Data frame received for 5\nI0622 21:42:13.031072 1377 log.go:172] (0xc000308000) (5) Data frame handling\nI0622 21:42:13.031090 1377 log.go:172] (0xc000308000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 21:42:13.059109 1377 log.go:172] (0xc000115290) Data frame received for 3\nI0622 21:42:13.059131 1377 log.go:172] (0xc000685c20) (3) Data frame handling\nI0622 21:42:13.059143 1377 log.go:172] (0xc000685c20) (3) Data frame sent\nI0622 21:42:13.059446 1377 log.go:172] (0xc000115290) Data frame received for 3\nI0622 21:42:13.059478 1377 log.go:172] (0xc000685c20) (3) Data frame handling\nI0622 21:42:13.059595 1377 log.go:172] (0xc000115290) Data frame received for 5\nI0622 21:42:13.059603 1377 log.go:172] (0xc000308000) (5) Data frame handling\nI0622 21:42:13.062218 1377 log.go:172] (0xc000115290) Data frame received for 1\nI0622 21:42:13.062234 1377 log.go:172] (0xc000908000) (1) Data frame handling\nI0622 21:42:13.062243 1377 log.go:172] (0xc000908000) (1) Data frame sent\nI0622 
21:42:13.062390 1377 log.go:172] (0xc000115290) (0xc000908000) Stream removed, broadcasting: 1\nI0622 21:42:13.062487 1377 log.go:172] (0xc000115290) Go away received\nI0622 21:42:13.062719 1377 log.go:172] (0xc000115290) (0xc000908000) Stream removed, broadcasting: 1\nI0622 21:42:13.062746 1377 log.go:172] (0xc000115290) (0xc000685c20) Stream removed, broadcasting: 3\nI0622 21:42:13.062752 1377 log.go:172] (0xc000115290) (0xc000308000) Stream removed, broadcasting: 5\n" Jun 22 21:42:13.069: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 21:42:13.069: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 21:42:13.069: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:42:13.072: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 22 21:42:23.080: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 21:42:23.080: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 21:42:23.080: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 21:42:23.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999436s Jun 22 21:42:24.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995840814s Jun 22 21:42:25.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990829337s Jun 22 21:42:26.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985597433s Jun 22 21:42:27.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979717563s Jun 22 21:42:28.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974331859s Jun 22 21:42:29.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.954422076s Jun 22 21:42:30.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.94749023s Jun 22 21:42:31.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.942479245s Jun 22 21:42:32.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.650713ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1622 Jun 22 21:42:33.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:42:33.412: INFO: stderr: "I0622 21:42:33.319010 1398 log.go:172] (0xc000018dc0) (0xc000689ea0) Create stream\nI0622 21:42:33.319073 1398 log.go:172] (0xc000018dc0) (0xc000689ea0) Stream added, broadcasting: 1\nI0622 21:42:33.322269 1398 log.go:172] (0xc000018dc0) Reply frame received for 1\nI0622 21:42:33.322343 1398 log.go:172] (0xc000018dc0) (0xc0004195e0) Create stream\nI0622 21:42:33.322370 1398 log.go:172] (0xc000018dc0) (0xc0004195e0) Stream added, broadcasting: 3\nI0622 21:42:33.323503 1398 log.go:172] (0xc000018dc0) Reply frame received for 3\nI0622 21:42:33.323542 1398 log.go:172] (0xc000018dc0) (0xc000b24000) Create stream\nI0622 21:42:33.323557 1398 log.go:172] (0xc000018dc0) (0xc000b24000) Stream added, broadcasting: 5\nI0622 21:42:33.324623 1398 log.go:172] (0xc000018dc0) Reply frame received for 5\nI0622 21:42:33.405312 1398 log.go:172] (0xc000018dc0) Data frame received for 3\nI0622 21:42:33.405339 1398 log.go:172] (0xc0004195e0) (3) Data frame handling\nI0622
21:42:33.405356 1398 log.go:172] (0xc0004195e0) (3) Data frame sent\nI0622 21:42:33.405382 1398 log.go:172] (0xc000018dc0) Data frame received for 5\nI0622 21:42:33.405404 1398 log.go:172] (0xc000b24000) (5) Data frame handling\nI0622 21:42:33.405422 1398 log.go:172] (0xc000b24000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 21:42:33.405586 1398 log.go:172] (0xc000018dc0) Data frame received for 3\nI0622 21:42:33.405598 1398 log.go:172] (0xc0004195e0) (3) Data frame handling\nI0622 21:42:33.405633 1398 log.go:172] (0xc000018dc0) Data frame received for 5\nI0622 21:42:33.405650 1398 log.go:172] (0xc000b24000) (5) Data frame handling\nI0622 21:42:33.407223 1398 log.go:172] (0xc000018dc0) Data frame received for 1\nI0622 21:42:33.407241 1398 log.go:172] (0xc000689ea0) (1) Data frame handling\nI0622 21:42:33.407256 1398 log.go:172] (0xc000689ea0) (1) Data frame sent\nI0622 21:42:33.407266 1398 log.go:172] (0xc000018dc0) (0xc000689ea0) Stream removed, broadcasting: 1\nI0622 21:42:33.407306 1398 log.go:172] (0xc000018dc0) Go away received\nI0622 21:42:33.407503 1398 log.go:172] (0xc000018dc0) (0xc000689ea0) Stream removed, broadcasting: 1\nI0622 21:42:33.407515 1398 log.go:172] (0xc000018dc0) (0xc0004195e0) Stream removed, broadcasting: 3\nI0622 21:42:33.407522 1398 log.go:172] (0xc000018dc0) (0xc000b24000) Stream removed, broadcasting: 5\n" Jun 22 21:42:33.412: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:42:33.412: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:42:33.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:42:33.624: INFO: stderr: "I0622 21:42:33.540893 1419 log.go:172] (0xc00055d130) (0xc000a2a000) Create stream\nI0622 21:42:33.540979 1419 log.go:172] (0xc00055d130) (0xc000a2a000) Stream added, broadcasting: 1\nI0622 21:42:33.543864 1419 log.go:172] (0xc00055d130) Reply frame received for 1\nI0622 21:42:33.543892 1419 log.go:172] (0xc00055d130) (0xc0006bbd60) Create stream\nI0622 21:42:33.543899 1419 log.go:172] (0xc00055d130) (0xc0006bbd60) Stream added, broadcasting: 3\nI0622 21:42:33.544746 1419 log.go:172] (0xc00055d130) Reply frame received for 3\nI0622 21:42:33.544764 1419 log.go:172] (0xc00055d130) (0xc0006bbf40) Create stream\nI0622 21:42:33.544770 1419 log.go:172] (0xc00055d130) (0xc0006bbf40) Stream added, broadcasting: 5\nI0622 21:42:33.546126 1419 log.go:172] (0xc00055d130) Reply frame received for 5\nI0622 21:42:33.616330 1419 log.go:172] (0xc00055d130) Data frame received for 3\nI0622 21:42:33.616372 1419 log.go:172] (0xc0006bbd60) (3) Data frame handling\nI0622 21:42:33.616385 1419 log.go:172] (0xc0006bbd60) (3) Data frame sent\nI0622 21:42:33.616392 1419 log.go:172] (0xc00055d130) Data frame received for 3\nI0622 21:42:33.616397 1419 log.go:172] (0xc0006bbd60) (3) Data frame handling\nI0622 21:42:33.616420 1419 log.go:172] (0xc00055d130) Data frame received for 5\nI0622 21:42:33.616427 1419 log.go:172] (0xc0006bbf40) (5) Data frame handling\nI0622 21:42:33.616438 1419 log.go:172] (0xc0006bbf40) (5) Data frame sent\nI0622 21:42:33.616446 1419 log.go:172] (0xc00055d130) Data frame received for 5\nI0622 21:42:33.616451 1419 log.go:172] (0xc0006bbf40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 
21:42:33.617894 1419 log.go:172] (0xc00055d130) Data frame received for 1\nI0622 21:42:33.617918 1419 log.go:172] (0xc000a2a000) (1) Data frame handling\nI0622 21:42:33.617930 1419 log.go:172] (0xc000a2a000) (1) Data frame sent\nI0622 21:42:33.617942 1419 log.go:172] (0xc00055d130) (0xc000a2a000) Stream removed, broadcasting: 1\nI0622 21:42:33.617958 1419 log.go:172] (0xc00055d130) Go away received\nI0622 21:42:33.618523 1419 log.go:172] (0xc00055d130) (0xc000a2a000) Stream removed, broadcasting: 1\nI0622 21:42:33.618542 1419 log.go:172] (0xc00055d130) (0xc0006bbd60) Stream removed, broadcasting: 3\nI0622 21:42:33.618551 1419 log.go:172] (0xc00055d130) (0xc0006bbf40) Stream removed, broadcasting: 5\n" Jun 22 21:42:33.624: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:42:33.624: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:42:33.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1622 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 21:42:33.833: INFO: stderr: "I0622 21:42:33.771901 1440 log.go:172] (0xc0003d6dc0) (0xc00068bea0) Create stream\nI0622 21:42:33.771955 1440 log.go:172] (0xc0003d6dc0) (0xc00068bea0) Stream added, broadcasting: 1\nI0622 21:42:33.774787 1440 log.go:172] (0xc0003d6dc0) Reply frame received for 1\nI0622 21:42:33.774818 1440 log.go:172] (0xc0003d6dc0) (0xc00068bf40) Create stream\nI0622 21:42:33.774825 1440 log.go:172] (0xc0003d6dc0) (0xc00068bf40) Stream added, broadcasting: 3\nI0622 21:42:33.775544 1440 log.go:172] (0xc0003d6dc0) Reply frame received for 3\nI0622 21:42:33.775572 1440 log.go:172] (0xc0003d6dc0) (0xc0005d4780) Create stream\nI0622 21:42:33.775579 1440 log.go:172] (0xc0003d6dc0) (0xc0005d4780) Stream added, broadcasting: 5\nI0622 21:42:33.776505 1440 log.go:172] (0xc0003d6dc0) Reply frame received for 5\nI0622 21:42:33.826785 1440 log.go:172] (0xc0003d6dc0) Data frame received for 3\nI0622 21:42:33.826835 1440 log.go:172] (0xc00068bf40) (3) Data frame handling\nI0622 21:42:33.826847 1440 log.go:172] (0xc00068bf40) (3) Data frame sent\nI0622 21:42:33.826855 1440 log.go:172] (0xc0003d6dc0) Data frame received for 3\nI0622 21:42:33.826861 1440 log.go:172] (0xc00068bf40) (3) Data frame handling\nI0622 21:42:33.826874 1440 log.go:172] (0xc0003d6dc0) Data frame received for 5\nI0622 21:42:33.826883 1440 log.go:172] (0xc0005d4780) (5) Data frame handling\nI0622 21:42:33.826892 1440 log.go:172] (0xc0005d4780) (5) Data frame sent\nI0622 21:42:33.826899 1440 log.go:172] (0xc0003d6dc0) Data frame received for 5\nI0622 21:42:33.826905 1440 log.go:172] (0xc0005d4780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 21:42:33.828351 1440 log.go:172] (0xc0003d6dc0) Data frame received for 1\nI0622 21:42:33.828379 1440 log.go:172] (0xc00068bea0) (1) Data frame handling\nI0622 21:42:33.828395 1440 log.go:172] (0xc00068bea0) (1) Data frame sent\nI0622 21:42:33.828411 1440 log.go:172] (0xc0003d6dc0) (0xc00068bea0) Stream removed, broadcasting: 1\nI0622 21:42:33.828485 1440 log.go:172] (0xc0003d6dc0) Go away received\nI0622 21:42:33.828816 1440 log.go:172] (0xc0003d6dc0) (0xc00068bea0) Stream removed, broadcasting: 1\nI0622 21:42:33.828840 1440 log.go:172] (0xc0003d6dc0) (0xc00068bf40) Stream removed, broadcasting: 3\nI0622 21:42:33.828850 1440 log.go:172] (0xc0003d6dc0) (0xc0005d4780) Stream removed, 
broadcasting: 5\n" Jun 22 21:42:33.833: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 21:42:33.833: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 21:42:33.833: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 22 21:42:53.851: INFO: Deleting all statefulset in ns statefulset-1622 Jun 22 21:42:53.855: INFO: Scaling statefulset ss to 0 Jun 22 21:42:53.864: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:42:53.866: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:42:53.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1622" for this suite. • [SLOW TEST:82.467 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":115,"skipped":1891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:42:53.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 22 21:42:53.969: INFO: >>> kubeConfig: /root/.kube/config Jun 22 21:42:55.895: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:43:07.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9409" for this suite. 
• [SLOW TEST:13.454 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":116,"skipped":1943,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:43:07.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 22 21:43:11.487: INFO: &Pod{ObjectMeta:{send-events-dd46049a-9e54-4b5c-a5f2-60d2d3a565ff events-5543 /api/v1/namespaces/events-5543/pods/send-events-dd46049a-9e54-4b5c-a5f2-60d2d3a565ff f4baefe2-27c2-482b-8a7d-c4202445d8d2 26486497 0 2020-06-22 21:43:07 +0000 UTC map[name:foo time:462632001] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vqvkm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vqvkm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vqvkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 21:43:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.210,StartTime:2020-06-22 21:43:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 21:43:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0050903d331fc2f3367786b2a4639c7c067392cfa59e1583d2ab5a55ba192b1a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 22 21:43:13.494: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 22 21:43:15.499: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:43:15.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5543" for this suite. • [SLOW TEST:8.158 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":117,"skipped":1950,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:43:15.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jun 22 21:43:15.627: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jun 22 21:43:15.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:15.935: INFO: stderr: "" Jun 22 21:43:15.935: INFO: stdout: "service/agnhost-slave created\n" Jun 22 21:43:15.935: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jun 22 21:43:15.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:16.262: INFO: stderr: "" Jun 22 21:43:16.262: INFO: stdout: "service/agnhost-master created\n" Jun 22 21:43:16.262: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jun 22 21:43:16.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:16.570: INFO: stderr: "" Jun 22 21:43:16.571: INFO: stdout: "service/frontend created\n" Jun 22 21:43:16.571: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jun 22 21:43:16.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:17.392: INFO: stderr: "" Jun 22 21:43:17.392: INFO: stdout: "deployment.apps/frontend created\n" Jun 22 21:43:17.392: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 22 21:43:17.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:17.686: INFO: stderr: "" Jun 22 21:43:17.686: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 22 21:43:17.686: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 22 21:43:17.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2612' Jun 22 21:43:17.974: INFO: stderr: "" Jun 22 21:43:17.974: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 22 21:43:17.974: INFO: Waiting for all frontend pods to be Running. Jun 22 21:43:28.024: INFO: Waiting for frontend to serve content. Jun 22 21:43:28.040: INFO: Trying to add a new entry to the guestbook. Jun 22 21:43:28.051: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 22 21:43:28.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.212: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.212: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 22 21:43:28.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.355: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.355: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 21:43:28.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.512: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.512: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 21:43:28.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.624: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 21:43:28.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.735: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 21:43:28.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2612' Jun 22 21:43:28.833: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:43:28.833: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:43:28.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2612" for this suite. 
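The frontend Service created above defaults to type ClusterIP; as the inline comment in its manifest notes, a cluster with a cloud load balancer can expose the same app externally by setting one extra field. A sketch of that variant (not part of the conformance run itself):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # allocates an external IP where the cluster supports it
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend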
• [SLOW TEST:13.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":118,"skipped":1957,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:43:28.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-17e08bea-c8bb-43d0-937b-0c5b480ab749 in namespace container-probe-5112 Jun 22 21:43:35.399: INFO: Started pod test-webserver-17e08bea-c8bb-43d0-937b-0c5b480ab749 in namespace container-probe-5112 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 21:43:35.403: INFO: Initial restart count of pod test-webserver-17e08bea-c8bb-43d0-937b-0c5b480ab749 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:47:36.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5112" for this suite. 
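The probe test above reduces to a pod whose HTTP GET liveness probe keeps succeeding, so the asserted restartCount stays 0 for the whole four-minute observation window. Roughly the shape involved — the image tag, probe path, and thresholds below are illustrative assumptions, not values read from the run:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed tag
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /          # a handler that always returns 200
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3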
• [SLOW TEST:247.228 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1960,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:47:36.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-047d3847-9eea-48ea-b431-355eccb485cf STEP: Creating configMap with name cm-test-opt-upd-33ac873d-6dca-4946-be58-07e47aaabf25 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-047d3847-9eea-48ea-b431-355eccb485cf STEP: Updating configmap cm-test-opt-upd-33ac873d-6dca-4946-be58-07e47aaabf25 STEP: Creating configMap with name cm-test-opt-create-5668d566-e5e4-4b30-80e7-54456c8bec31 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:47:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9491" for this suite. 
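The point of the "optional" flag exercised above: a pod can mount a configMap volume whose backing configMap is deleted afterwards or does not exist yet, still start its containers, and later see the contents once the configMap appears; ordinary updates propagate to the mounted files the same way. A single-volume sketch with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-sketch
spec:
  containers:
  - name: cm-watcher
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "while true; do cat /etc/cm-vol/data-1 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm-vol
  volumes:
  - name: cm-vol
    configMap:
      name: cm-test-opt-create   # may not exist at pod creation time
      optional: true             # without this, the container would sit in ContainerCreating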
• [SLOW TEST:8.561 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1968,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:47:44.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 21:47:44.765: INFO: Waiting up to 5m0s for pod "pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7" in namespace "emptydir-3814" to be "success or failure" Jun 22 21:47:44.772: INFO: Pod "pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489202ms Jun 22 21:47:46.811: INFO: Pod "pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04570293s Jun 22 21:47:48.817: INFO: Pod "pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051778502s STEP: Saw pod success Jun 22 21:47:48.817: INFO: Pod "pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7" satisfied condition "success or failure" Jun 22 21:47:48.820: INFO: Trying to get logs from node jerma-worker2 pod pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7 container test-container: STEP: delete the pod Jun 22 21:47:48.856: INFO: Waiting for pod pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7 to disappear Jun 22 21:47:48.886: INFO: Pod pod-13fb4eb3-54d1-4ccf-94f8-ddbd2c3972e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:47:48.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3814" for this suite. 
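The (root,0666,default) case is a pod with an emptyDir on the node's default storage medium whose test container creates a file with mode 0666 and prints the observed permissions for the framework to match. A sketch; the mounttest-style flags are an assumption about the helper image, not copied from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-sketch
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed helper image
    args:
    - --new_file_0666=/test-volume/test-file   # create the file with mode 0666
    - --file_perm=/test-volume/test-file       # echo its permissions back
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: backed by node disk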
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1969,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:47:48.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 22 21:47:49.010: INFO: >>> kubeConfig: /root/.kube/config Jun 22 21:47:51.931: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:01.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3615" for this suite. • [SLOW TEST:12.539 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":122,"skipped":1985,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:01.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:05.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4992" for this suite. 
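The kubelet-test pod is about as small as conformance pods get: run one shell command, let the container exit, and assert the text shows up via the logs endpoint. A sketch with an assumed image and message:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-sketch
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "echo 'Hello from the kubelet'"]

kubectl logs busybox-scheduling-sketch would then return the echoed line.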
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1994,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:05.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 22 21:48:05.662: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 22 21:48:10.667: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:11.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2715" for this suite. • [SLOW TEST:6.213 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":124,"skipped":2006,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:11.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:48:11.902: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880" in namespace "security-context-test-165" to be "success or failure" Jun 22 21:48:11.905: INFO: Pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272064ms Jun 22 21:48:14.067: INFO: Pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.164334358s Jun 22 21:48:16.070: INFO: Pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1678127s Jun 22 21:48:16.070: INFO: Pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880" satisfied condition "success or failure" Jun 22 21:48:16.075: INFO: Got logs for pod "busybox-privileged-false-37026668-851a-4efd-b9d4-f104a1d3a880": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:16.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-165" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2011,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:16.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368 Jun 22 21:48:16.221: INFO: Pod name my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368: Found 0 pods out of 1 Jun 22 21:48:21.246: INFO: Pod name my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368: Found 1 pods out of 1 Jun 22 21:48:21.246: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368" are running Jun 22 21:48:21.258: INFO: Pod "my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368-qgvhd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:48:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:48:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:48:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 21:48:16 +0000 UTC Reason: Message:}]) Jun 22 21:48:21.258: INFO: Trying to dial the pod Jun 22 21:48:26.270: INFO: Controller my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368: Got expected result from replica 1 [my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368-qgvhd]: "my-hostname-basic-90f10a2e-5080-418c-8053-09afa9490368-qgvhd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:26.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-1365" for this suite. • [SLOW TEST:10.160 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":126,"skipped":2014,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:26.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:48:26.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3" in namespace "downward-api-1217" to be "success or failure" Jun 22 21:48:26.386: INFO: Pod "downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.555841ms Jun 22 21:48:28.432: INFO: Pod "downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068490704s Jun 22 21:48:30.435: INFO: Pod "downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071627995s STEP: Saw pod success Jun 22 21:48:30.435: INFO: Pod "downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3" satisfied condition "success or failure" Jun 22 21:48:30.438: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3 container client-container: STEP: delete the pod Jun 22 21:48:30.479: INFO: Waiting for pod downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3 to disappear Jun 22 21:48:30.554: INFO: Pod downwardapi-volume-d5714015-3065-4b47-b90d-49b84862c4d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:30.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1217" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:30.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 22 21:48:30.742: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:38.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5067" for this suite. • [SLOW TEST:7.898 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":128,"skipped":2065,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:38.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jun 22 21:48:38.547: INFO: Waiting up to 5m0s for pod "var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0" in namespace "var-expansion-4051" to be "success or failure" Jun 22 21:48:38.563: INFO: Pod "var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.606932ms Jun 22 21:48:40.567: INFO: Pod "var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020432991s Jun 22 21:48:42.572: INFO: Pod "var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024892216s STEP: Saw pod success Jun 22 21:48:42.572: INFO: Pod "var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0" satisfied condition "success or failure" Jun 22 21:48:42.575: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0 container dapi-container: STEP: delete the pod Jun 22 21:48:42.615: INFO: Waiting for pod var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0 to disappear Jun 22 21:48:42.629: INFO: Pod var-expansion-f1ac85d1-4cb2-4945-ad25-1de1565676f0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:48:42.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4051" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2066,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:48:42.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:48:42.755: INFO: Create a RollingUpdate DaemonSet Jun 22 21:48:42.758: INFO: Check that daemon pods launch on every node of the cluster Jun 22 21:48:42.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:42.767: INFO: Number of nodes with available pods: 0 Jun 22 21:48:42.767: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:48:43.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:43.941: INFO: Number of nodes with available pods: 0 Jun 22 21:48:43.941: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:48:44.897: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:44.900: INFO: Number of nodes with available pods: 0 Jun 22 21:48:44.900: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:48:45.855: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:45.858: INFO: Number of nodes with available pods: 0 Jun 22 21:48:45.858: INFO: Node jerma-worker is 
running more than one daemon pod Jun 22 21:48:46.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:46.776: INFO: Number of nodes with available pods: 0 Jun 22 21:48:46.776: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:48:47.775: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:48:47.779: INFO: Number of nodes with available pods: 2 Jun 22 21:48:47.779: INFO: Number of running nodes: 2, number of available pods: 2 Jun 22 21:48:47.779: INFO: Update the DaemonSet to trigger a rollout Jun 22 21:48:47.784: INFO: Updating DaemonSet daemon-set Jun 22 21:48:59.821: INFO: Roll back the DaemonSet before rollout is complete Jun 22 21:48:59.828: INFO: Updating DaemonSet daemon-set Jun 22 21:48:59.828: INFO: Make sure DaemonSet rollback is complete Jun 22 21:48:59.834: INFO: Wrong image for pod: daemon-set-bhbnl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 22 21:48:59.834: INFO: Pod daemon-set-bhbnl is not available Jun 22 21:48:59.855: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:49:00.859: INFO: Wrong image for pod: daemon-set-bhbnl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 22 21:49:00.859: INFO: Pod daemon-set-bhbnl is not available Jun 22 21:49:00.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:49:01.978: INFO: Wrong image for pod: daemon-set-bhbnl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jun 22 21:49:01.978: INFO: Pod daemon-set-bhbnl is not available Jun 22 21:49:01.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:49:02.859: INFO: Pod daemon-set-bhf6q is not available Jun 22 21:49:02.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4321, will wait for the garbage collector to delete the pods Jun 22 21:49:02.930: INFO: Deleting DaemonSet.extensions daemon-set took: 6.866239ms Jun 22 21:49:03.230: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240397ms Jun 22 21:49:06.434: INFO: Number of nodes with available pods: 0 Jun 22 21:49:06.434: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 21:49:06.437: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4321/daemonsets","resourceVersion":"26488140"},"items":null} Jun 22 21:49:06.440: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4321/pods","resourceVersion":"26488140"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:06.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4321" for this suite. 
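The rollback sequence above is: create a RollingUpdate DaemonSet, update it to an unpullable image (foo:non-existent), then roll back before the rollout completes; pods that were never replaced must keep running without a restart. A sketch of the starting manifest, with label and container names assumed:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the image the log expects after rollback

The rollback itself is the programmatic equivalent of kubectl rollout undo daemonset/daemon-set.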
• [SLOW TEST:23.820 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":130,"skipped":2073,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:06.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:49:06.523: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 22 21:49:07.582: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:08.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9945" for this suite. 
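The quota failure is easy to reproduce: a ResourceQuota capping the namespace at two pods plus a ReplicationController asking for three. The RC then carries a ReplicaFailure condition in status.conditions until spec.replicas is lowered to fit, at which point the condition clears, just as the log shows. A sketch with an arbitrary stand-in image:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3        # one more than the quota permits
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: pause
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "3600"]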
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":131,"skipped":2073,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:08.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-hcp7 STEP: Creating a pod to test atomic-volume-subpath Jun 22 21:49:09.167: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hcp7" in namespace "subpath-4059" to be "success or failure" Jun 22 21:49:09.170: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228237ms Jun 22 21:49:11.175: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008357474s Jun 22 21:49:13.180: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 4.012788567s Jun 22 21:49:15.184: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 6.016710614s Jun 22 21:49:17.188: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 8.021021219s Jun 22 21:49:19.194: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 10.027040191s Jun 22 21:49:21.197: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 12.030473215s Jun 22 21:49:23.201: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 14.034621126s Jun 22 21:49:25.206: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 16.039655283s Jun 22 21:49:27.211: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 18.044534946s Jun 22 21:49:29.215: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 20.048156137s Jun 22 21:49:31.219: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Running", Reason="", readiness=true. Elapsed: 22.051990488s Jun 22 21:49:33.223: INFO: Pod "pod-subpath-test-configmap-hcp7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055986316s STEP: Saw pod success Jun 22 21:49:33.223: INFO: Pod "pod-subpath-test-configmap-hcp7" satisfied condition "success or failure" Jun 22 21:49:33.225: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-hcp7 container test-container-subpath-configmap-hcp7: STEP: delete the pod Jun 22 21:49:33.243: INFO: Waiting for pod pod-subpath-test-configmap-hcp7 to disappear Jun 22 21:49:33.253: INFO: Pod pod-subpath-test-configmap-hcp7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-hcp7 Jun 22 21:49:33.253: INFO: Deleting pod "pod-subpath-test-configmap-hcp7" in namespace "subpath-4059" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:33.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4059" for this suite. • [SLOW TEST:24.325 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":132,"skipped":2086,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:33.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:49:33.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:49:36.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459373, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459373, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459373, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459373, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:49:39.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:39.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-605" for this suite. STEP: Destroying namespace "webhook-605-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":133,"skipped":2096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:39.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 22 21:49:39.322: INFO: Waiting up to 5m0s for pod "pod-0f6c2a2d-a463-4844-95bb-710458df723f" in namespace "emptydir-1483" to be "success or failure" Jun 22 21:49:39.332: INFO: Pod "pod-0f6c2a2d-a463-4844-95bb-710458df723f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.898987ms Jun 22 21:49:41.336: INFO: Pod "pod-0f6c2a2d-a463-4844-95bb-710458df723f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014157704s Jun 22 21:49:43.341: INFO: Pod "pod-0f6c2a2d-a463-4844-95bb-710458df723f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018631323s STEP: Saw pod success Jun 22 21:49:43.341: INFO: Pod "pod-0f6c2a2d-a463-4844-95bb-710458df723f" satisfied condition "success or failure" Jun 22 21:49:43.344: INFO: Trying to get logs from node jerma-worker2 pod pod-0f6c2a2d-a463-4844-95bb-710458df723f container test-container: STEP: delete the pod Jun 22 21:49:43.377: INFO: Waiting for pod pod-0f6c2a2d-a463-4844-95bb-710458df723f to disappear Jun 22 21:49:43.386: INFO: Pod pod-0f6c2a2d-a463-4844-95bb-710458df723f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:43.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1483" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2120,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:43.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-285e5138-c228-47d1-910d-8f17792c9992 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-285e5138-c228-47d1-910d-8f17792c9992 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:49.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2890" for this suite. 
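For reference, the projected-ConfigMap flow exercised above (create the projection, edit the ConfigMap, wait for the mounted file to change) can be reproduced by hand. The sketch below is illustrative rather than the suite's actual manifests: the namespace, object names, and busybox image are placeholders.

kubectl create namespace projected-demo
kubectl create configmap demo-cm --from-literal=data-1=value-1 -n projected-demo
kubectl apply -n projected-demo -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
# Edit the ConfigMap; the kubelet refreshes projected volumes on its sync
# period, so the new value appears in the pod log within about a minute:
kubectl patch configmap demo-cm -n projected-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-demo -n projected-demo

The test drives the same sequence through the framework and simply polls the volume until the update is observed ("STEP: waiting to observe update in volume" above), which is why the spec runs for several seconds rather than failing fast.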
• [SLOW TEST:6.100 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2134,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:49.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 21:49:49.558: INFO: Waiting up to 5m0s for pod "pod-bf1a6b28-182f-4517-ba09-660d49e19f5f" in namespace "emptydir-9612" to be "success or failure" Jun 22 21:49:49.560: INFO: Pod "pod-bf1a6b28-182f-4517-ba09-660d49e19f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543321ms Jun 22 21:49:51.564: INFO: Pod "pod-bf1a6b28-182f-4517-ba09-660d49e19f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006447957s Jun 22 21:49:53.568: INFO: Pod "pod-bf1a6b28-182f-4517-ba09-660d49e19f5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01072681s STEP: Saw pod success Jun 22 21:49:53.568: INFO: Pod "pod-bf1a6b28-182f-4517-ba09-660d49e19f5f" satisfied condition "success or failure" Jun 22 21:49:53.572: INFO: Trying to get logs from node jerma-worker pod pod-bf1a6b28-182f-4517-ba09-660d49e19f5f container test-container: STEP: delete the pod Jun 22 21:49:53.617: INFO: Waiting for pod pod-bf1a6b28-182f-4517-ba09-660d49e19f5f to disappear Jun 22 21:49:53.626: INFO: Pod pod-bf1a6b28-182f-4517-ba09-660d49e19f5f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:53.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9612" for this suite. 
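Both emptyDir tmpfs cases in this run (0644 earlier, 0666 immediately above) follow the same pattern: run a pod that mounts a Memory-medium emptyDir, create a file with the requested mode inside it, and check the mode and filesystem type reported from inside the container. A hand-rolled equivalent, using a placeholder busybox image rather than the suite's own test image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "echo content > /mnt/volume/file && chmod 0644 /mnt/volume/file && stat -c '%a' /mnt/volume/file && grep ' /mnt/volume ' /proc/mounts"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs-backed, matching the [LinuxOnly] tmpfs variants
EOF
# After the pod reaches Succeeded (the "Saw pod success" lines above), the
# log should show 644 plus a tmpfs entry for the mount:
kubectl logs emptydir-mode-demo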
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2138,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:53.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 22 21:49:58.778: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:58.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2839" for this suite. • [SLOW TEST:5.245 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":137,"skipped":2142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:58.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:49:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "tables-1022" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":138,"skipped":2261,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:49:58.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jun 22 21:49:59.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8388' Jun 22 21:49:59.348: INFO: stderr: "" Jun 22 21:49:59.348: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 21:49:59.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8388' Jun 22 21:49:59.467: INFO: stderr: "" Jun 22 21:49:59.467: INFO: stdout: "update-demo-nautilus-7mnt4 update-demo-nautilus-cwcx6 " Jun 22 21:49:59.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mnt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8388' Jun 22 21:49:59.569: INFO: stderr: "" Jun 22 21:49:59.569: INFO: stdout: "" Jun 22 21:49:59.569: INFO: update-demo-nautilus-7mnt4 is created but not running Jun 22 21:50:04.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8388' Jun 22 21:50:04.664: INFO: stderr: "" Jun 22 21:50:04.664: INFO: stdout: "update-demo-nautilus-7mnt4 update-demo-nautilus-cwcx6 " Jun 22 21:50:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mnt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8388' Jun 22 21:50:04.834: INFO: stderr: "" Jun 22 21:50:04.834: INFO: stdout: "true" Jun 22 21:50:04.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mnt4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8388' Jun 22 21:50:04.924: INFO: stderr: "" Jun 22 21:50:04.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 21:50:04.924: INFO: validating pod update-demo-nautilus-7mnt4 Jun 22 21:50:04.959: INFO: got data: { "image": "nautilus.jpg" } Jun 22 21:50:04.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 21:50:04.959: INFO: update-demo-nautilus-7mnt4 is verified up and running Jun 22 21:50:04.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwcx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8388' Jun 22 21:50:05.060: INFO: stderr: "" Jun 22 21:50:05.060: INFO: stdout: "true" Jun 22 21:50:05.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwcx6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8388' Jun 22 21:50:05.152: INFO: stderr: "" Jun 22 21:50:05.152: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 21:50:05.152: INFO: validating pod update-demo-nautilus-cwcx6 Jun 22 21:50:05.180: INFO: got data: { "image": "nautilus.jpg" } Jun 22 21:50:05.180: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 21:50:05.180: INFO: update-demo-nautilus-cwcx6 is verified up and running STEP: using delete to clean up resources Jun 22 21:50:05.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8388' Jun 22 21:50:05.294: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 21:50:05.294: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 21:50:05.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8388' Jun 22 21:50:05.403: INFO: stderr: "No resources found in kubectl-8388 namespace.\n" Jun 22 21:50:05.403: INFO: stdout: "" Jun 22 21:50:05.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8388 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 21:50:05.523: INFO: stderr: "" Jun 22 21:50:05.523: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:50:05.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8388" for this suite. 
• [SLOW TEST:6.556 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":139,"skipped":2262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:50:05.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jun 22 21:50:05.808: INFO: Waiting up to 5m0s for pod "client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c" in namespace "containers-5263" to be "success or failure" Jun 22 21:50:05.975: INFO: Pod "client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 167.462003ms Jun 22 21:50:07.979: INFO: Pod "client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171535324s Jun 22 21:50:09.983: INFO: Pod "client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174904165s STEP: Saw pod success Jun 22 21:50:09.983: INFO: Pod "client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c" satisfied condition "success or failure" Jun 22 21:50:09.986: INFO: Trying to get logs from node jerma-worker2 pod client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c container test-container: STEP: delete the pod Jun 22 21:50:10.036: INFO: Waiting for pod client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c to disappear Jun 22 21:50:10.082: INFO: Pod client-containers-a9bb89d7-acb4-41af-8f25-7ddd475cce6c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:50:10.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5263" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:50:10.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:50:21.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4472" for this suite. • [SLOW TEST:11.215 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":141,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:50:21.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7669 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7669;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7669 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7669;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7669.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7669.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7669.svc A)" && test -n "$$check" && 
echo OK > /results/wheezy_tcp@dns-test-service.dns-7669.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7669.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7669.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.179_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7669 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7669;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7669 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7669;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7669.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7669.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7669.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7669.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7669.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7669.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7669.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7669.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.108.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.108.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.179_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:50:29.554: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.557: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.572: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.596: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.599: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.602: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.605: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.608: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server 
could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.611: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.614: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:29.636: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:34.640: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.644: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.647: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.655: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.658: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.661: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.664: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could 
not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.693: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.696: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.698: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.705: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:34.738: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:39.642: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.646: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.649: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods 
dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.652: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.655: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.658: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.661: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.664: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.686: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.689: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.695: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.703: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.705: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.707: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.709: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:39.722: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:44.641: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.645: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.656: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.662: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.665: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.667: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.684: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.687: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.690: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.693: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods 
dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.696: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.702: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.705: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:44.720: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:49.641: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.644: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.647: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.649: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.651: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.654: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.656: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods 
dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.659: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.678: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.681: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.683: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.689: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.692: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.695: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.697: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:49.711: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:54.642: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.646: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods 
dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.649: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.652: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.654: INFO: Unable to read wheezy_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.657: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.660: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.662: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.695: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.698: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.700: INFO: Unable to read jessie_udp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669 from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.706: INFO: Unable to read jessie_udp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc from pod dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569: the server could not find the requested resource (get 
pods dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569) Jun 22 21:50:54.737: INFO: Lookups using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7669 wheezy_tcp@dns-test-service.dns-7669 wheezy_udp@dns-test-service.dns-7669.svc wheezy_tcp@dns-test-service.dns-7669.svc wheezy_udp@_http._tcp.dns-test-service.dns-7669.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7669.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7669 jessie_tcp@dns-test-service.dns-7669 jessie_udp@dns-test-service.dns-7669.svc jessie_tcp@dns-test-service.dns-7669.svc jessie_udp@_http._tcp.dns-test-service.dns-7669.svc jessie_tcp@_http._tcp.dns-test-service.dns-7669.svc] Jun 22 21:50:59.720: INFO: DNS probes using dns-7669/dns-test-1364f63a-bdad-40f0-a54f-6f7f43858569 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:51:00.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7669" for this suite. • [SLOW TEST:38.990 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":142,"skipped":2367,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:51:00.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:51:17.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7000" for this suite. • [SLOW TEST:17.196 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":143,"skipped":2371,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:51:17.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:51:18.162: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:51:20.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459478, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459478, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459478, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459478, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:51:23.252: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:51:23.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9072-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:51:24.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1514" for this suite. STEP: Destroying namespace "webhook-1514-markers" for this suite. 
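The mutating-webhook registration summarized in the STEP lines above comes down to a MutatingWebhookConfiguration that points the API server at the test's in-cluster service. A minimal sketch, where the service name, namespace, and CRD plural are taken from this log but the webhook name, path, and CA bundle are illustrative placeholders, not what the test generates:

    kubectl apply -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: demo-mutating-webhook            # hypothetical name
    webhooks:
    - name: mutate-crd.webhook.example.com
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      clientConfig:
        service:
          name: e2e-test-webhook             # service name from the log
          namespace: webhook-1514            # namespace from the log
          path: /mutating-custom-resource    # placeholder path
        caBundle: "<base64-encoded CA>"      # placeholder
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-9072-crds"]
    EOF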
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.985 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":144,"skipped":2376,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:51:24.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:51:25.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:51:27.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459485, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459485, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459485, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459485, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:51:30.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a 
non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:51:40.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8997" for this suite. STEP: Destroying namespace "webhook-8997-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.265 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":145,"skipped":2388,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:51:40.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:51:40.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd" in namespace "downward-api-9183" to be "success or failure" Jun 22 21:51:40.857: INFO: Pod "downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.670935ms Jun 22 21:51:42.861: INFO: Pod "downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050532451s Jun 22 21:51:44.864: INFO: Pod "downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053623277s STEP: Saw pod success Jun 22 21:51:44.864: INFO: Pod "downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd" satisfied condition "success or failure" Jun 22 21:51:44.866: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd container client-container: STEP: delete the pod Jun 22 21:51:44.948: INFO: Waiting for pod downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd to disappear Jun 22 21:51:44.958: INFO: Pod downwardapi-volume-d3ab3cb9-879d-488b-ba4b-0080f25e16fd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:51:44.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9183" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2393,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:51:44.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-5hkn STEP: Creating a pod to test atomic-volume-subpath Jun 22 21:51:45.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5hkn" in namespace "subpath-3880" to be "success or failure" Jun 22 21:51:45.285: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659693ms Jun 22 21:51:47.289: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011234284s Jun 22 21:51:49.293: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 4.015137238s Jun 22 21:51:51.301: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 6.022334098s Jun 22 21:51:53.305: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 8.026316025s Jun 22 21:51:55.309: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 10.030842618s Jun 22 21:51:57.314: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 12.036067759s Jun 22 21:51:59.319: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 14.04055271s Jun 22 21:52:01.323: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 16.045216833s Jun 22 21:52:03.328: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.049755039s Jun 22 21:52:05.332: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 20.054021889s Jun 22 21:52:07.344: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Running", Reason="", readiness=true. Elapsed: 22.066027339s Jun 22 21:52:09.349: INFO: Pod "pod-subpath-test-downwardapi-5hkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.071244742s STEP: Saw pod success Jun 22 21:52:09.350: INFO: Pod "pod-subpath-test-downwardapi-5hkn" satisfied condition "success or failure" Jun 22 21:52:09.378: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-5hkn container test-container-subpath-downwardapi-5hkn: STEP: delete the pod Jun 22 21:52:09.411: INFO: Waiting for pod pod-subpath-test-downwardapi-5hkn to disappear Jun 22 21:52:09.418: INFO: Pod pod-subpath-test-downwardapi-5hkn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5hkn Jun 22 21:52:09.418: INFO: Deleting pod "pod-subpath-test-downwardapi-5hkn" in namespace "subpath-3880" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:52:09.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3880" for this suite. • [SLOW TEST:24.461 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":147,"skipped":2396,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:52:09.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:52:27.509: INFO: Container started at 2020-06-22 21:52:11 +0000 UTC, pod became ready at 2020-06-22 21:52:26 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:52:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2667" for this suite. 
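The container started at 21:52:11 and only became ready at 21:52:26, which is the expected effect of a readiness probe with an initial delay: the kubelet reports the pod NotReady until the delay elapses and the first probe succeeds. A minimal sketch of a pod that behaves this way, assuming an illustrative image and timings rather than the test's own:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo                   # hypothetical
    spec:
      containers:
      - name: app
        image: busybox:1.31                  # assumption; the test uses its own test image
        command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
        readinessProbe:
          exec:
            command: ["cat", "/tmp/ready"]
          initialDelaySeconds: 15            # pod stays NotReady at least this long
          periodSeconds: 5
    EOF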
• [SLOW TEST:18.089 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2407,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:52:27.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-68544273-f0a2-4da9-9ee4-4c4994e59746 STEP: Creating a pod to test consume configMaps Jun 22 21:52:27.591: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7" in namespace "projected-4376" to be "success or failure" Jun 22 21:52:27.594: INFO: Pod "pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.689194ms Jun 22 21:52:29.598: INFO: Pod "pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007435607s Jun 22 21:52:31.603: INFO: Pod "pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012079164s STEP: Saw pod success Jun 22 21:52:31.603: INFO: Pod "pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7" satisfied condition "success or failure" Jun 22 21:52:31.606: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7 container projected-configmap-volume-test: STEP: delete the pod Jun 22 21:52:31.641: INFO: Waiting for pod pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7 to disappear Jun 22 21:52:31.655: INFO: Pod pod-projected-configmaps-6da9b5b9-c20d-4895-9623-f5987c9a56a7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:52:31.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4376" for this suite. 
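The volume exercised here is a projected volume with a configMap source, mounted read-only into the test container. A minimal sketch of the shape involved, with a hypothetical ConfigMap name and key:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo                # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox:1.31
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: my-config                # hypothetical ConfigMap holding key data-1
    EOF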
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2410,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:52:31.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:52:31.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9" in namespace "projected-2755" to be "success or failure" Jun 22 21:52:31.760: INFO: Pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.308754ms Jun 22 21:52:33.822: INFO: Pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07543614s Jun 22 21:52:35.827: INFO: Pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9": Phase="Running", Reason="", readiness=true. Elapsed: 4.079742792s Jun 22 21:52:37.831: INFO: Pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084514773s STEP: Saw pod success Jun 22 21:52:37.831: INFO: Pod "downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9" satisfied condition "success or failure" Jun 22 21:52:37.834: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9 container client-container: STEP: delete the pod Jun 22 21:52:37.866: INFO: Waiting for pod downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9 to disappear Jun 22 21:52:37.877: INFO: Pod downwardapi-volume-74b868fe-389b-4d74-a1af-e271efc2eae9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:52:37.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2755" for this suite. 
• [SLOW TEST:6.221 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2421,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:52:37.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-372.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-372.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-372.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-372.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-372.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.10.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.10.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.10.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.10.207_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-372.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-372.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-372.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-372.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-372.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-372.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-372.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.10.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.10.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.10.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.10.207_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:52:44.103: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.108: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.111: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.128: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.131: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.133: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.136: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:44.154: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:52:49.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 
21:52:49.193: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.197: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.216: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.219: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.222: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.225: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:49.241: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:52:54.158: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.166: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.170: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.193: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods 
dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.199: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.202: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:54.219: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:52:59.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.257: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.260: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.262: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.264: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the 
requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:52:59.281: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:53:04.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.190: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.192: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.195: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:04.214: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:53:09.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-372.svc.cluster.local from pod 
dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.214: INFO: Unable to read jessie_udp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.220: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local from pod dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9: the server could not find the requested resource (get pods dns-test-666c0843-792e-4982-aa18-bfb83d7560f9) Jun 22 21:53:09.236: INFO: Lookups using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 failed for: [wheezy_udp@dns-test-service.dns-372.svc.cluster.local wheezy_tcp@dns-test-service.dns-372.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_udp@dns-test-service.dns-372.svc.cluster.local jessie_tcp@dns-test-service.dns-372.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-372.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-372.svc.cluster.local] Jun 22 21:53:14.220: INFO: DNS probes using dns-372/dns-test-666c0843-792e-4982-aa18-bfb83d7560f9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:53:14.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-372" for this suite. 
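The probe pod above exercises three record types: A records for the service name, SRV records for its named http port, and a PTR record for the cluster IP 10.97.10.207. A hand-run spot-check of the same lookups could look like this, assuming any dig-capable image (tutum/dnsutils here is an assumption, not the image the test runs):

    kubectl run -n dns-372 dns-check --restart=Never --rm -it --image=tutum/dnsutils -- \
      dig +search dns-test-service.dns-372.svc.cluster.local A
    # SRV record for the named port:
    #   dig +search _http._tcp.dns-test-service.dns-372.svc.cluster.local SRV
    # reverse lookup of the cluster IP seen in the probe commands:
    #   dig -x 10.97.10.207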
• [SLOW TEST:36.945 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":151,"skipped":2440,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:53:14.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-b0a00b21-a34b-4f3b-bc9b-58a5b7b74fb6 in namespace container-probe-3041 Jun 22 21:53:19.207: INFO: Started pod liveness-b0a00b21-a34b-4f3b-bc9b-58a5b7b74fb6 in namespace container-probe-3041 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 21:53:19.211: INFO: Initial restart count of pod liveness-b0a00b21-a34b-4f3b-bc9b-58a5b7b74fb6 is 0 Jun 22 21:53:43.289: INFO: Restart count of pod container-probe-3041/liveness-b0a00b21-a34b-4f3b-bc9b-58a5b7b74fb6 is now 1 (24.078488215s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:53:43.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3041" for this suite. 
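The restart counted at ~24s is the kubelet reacting to failed /healthz probes and recreating the container. A minimal sketch of a pod wired the same way, assuming the stock liveness demo image (which serves /healthz and then deliberately starts failing it), not the test's own image:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo                    # hypothetical
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness           # assumption: classic probe demo image
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
    EOF
    kubectl get pod liveness-demo -w         # watch RESTARTS tick up, as in the log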
• [SLOW TEST:28.516 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2449,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:53:43.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 22 21:53:49.953: INFO: Successfully updated pod "adopt-release-dprtj" STEP: Checking that the Job readopts the Pod Jun 22 21:53:49.953: INFO: Waiting up to 15m0s for pod "adopt-release-dprtj" in namespace "job-5097" to be "adopted" Jun 22 21:53:49.963: INFO: Pod "adopt-release-dprtj": Phase="Running", Reason="", readiness=true. Elapsed: 9.939098ms Jun 22 21:53:51.967: INFO: Pod "adopt-release-dprtj": Phase="Running", Reason="", readiness=true. Elapsed: 2.014283239s Jun 22 21:53:51.967: INFO: Pod "adopt-release-dprtj" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 22 21:53:52.476: INFO: Successfully updated pod "adopt-release-dprtj" STEP: Checking that the Job releases the Pod Jun 22 21:53:52.476: INFO: Waiting up to 15m0s for pod "adopt-release-dprtj" in namespace "job-5097" to be "released" Jun 22 21:53:52.484: INFO: Pod "adopt-release-dprtj": Phase="Running", Reason="", readiness=true. Elapsed: 7.430766ms Jun 22 21:53:54.488: INFO: Pod "adopt-release-dprtj": Phase="Running", Reason="", readiness=true. Elapsed: 2.011275871s Jun 22 21:53:54.488: INFO: Pod "adopt-release-dprtj" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:53:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5097" for this suite. 
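Adoption and release both hinge on the pod's labels matching the Job's selector: stripping those labels is what makes the controller release the pod, which is what the "Removing the labels" step above does through the API. A rough hand-run equivalent, assuming the default Job pod label keys (controller-uid and job-name); the pod name is taken from the log:

    # inspect who currently owns the pod
    kubectl -n job-5097 get pod adopt-release-dprtj \
      -o jsonpath='{.metadata.ownerReferences[*].kind}'
    # release it by removing the Job's selector labels (trailing '-' deletes a label)
    kubectl -n job-5097 label pod adopt-release-dprtj controller-uid- job-name-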
• [SLOW TEST:11.282 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":153,"skipped":2461,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:53:54.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f2300972-81e5-453a-812d-5bf845e8a228 STEP: Creating a pod to test consume configMaps Jun 22 21:53:54.687: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab" in namespace "projected-6473" to be "success or failure" Jun 22 21:53:54.691: INFO: Pod "pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.849532ms Jun 22 21:53:56.695: INFO: Pod "pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008060088s Jun 22 21:53:58.699: INFO: Pod "pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012001159s STEP: Saw pod success Jun 22 21:53:58.699: INFO: Pod "pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab" satisfied condition "success or failure" Jun 22 21:53:58.702: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab container projected-configmap-volume-test: STEP: delete the pod Jun 22 21:53:58.740: INFO: Waiting for pod pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab to disappear Jun 22 21:53:58.786: INFO: Pod pod-projected-configmaps-d53224ee-f360-468f-bfb7-9da0299985ab no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:53:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6473" for this suite. 
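The "with mappings as non-root" variant differs from the plain projected-ConfigMap case in two ways: an items list maps a ConfigMap key to a chosen path inside the mount, and the pod runs under a non-root securityContext. A minimal sketch with hypothetical names and UID:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-nonroot-demo        # hypothetical
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                      # non-root UID; illustrative
      containers:
      - name: test
        image: busybox:1.31
        command: ["cat", "/etc/projected/path/to/data"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: my-config                # hypothetical
              items:
              - key: data-1                  # hypothetical key
                path: path/to/data           # mapped path inside the mount
    EOF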
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2464,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:53:58.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 21:53:58.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2881' Jun 22 21:54:01.803: INFO: stderr: "" Jun 22 21:54:01.803: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Jun 22 21:54:01.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2881' Jun 22 21:54:09.508: INFO: stderr: "" Jun 22 21:54:09.508: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:09.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2881" for this suite. 
• [SLOW TEST:10.719 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":155,"skipped":2485,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:09.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:09.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5080" for this suite. 
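The QOS class the test verifies is Guaranteed, which a pod earns when every container's requests equal its limits for both cpu and memory. A minimal sketch plus the same check the test performs through the API, here done with kubectl (names and quantities illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo                         # hypothetical
    spec:
      containers:
      - name: app
        image: busybox:1.31
        command: ["sleep", "600"]
        resources:
          requests: {cpu: "100m", memory: "128Mi"}
          limits:   {cpu: "100m", memory: "128Mi"}
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed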
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":156,"skipped":2486,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:09.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jun 22 21:54:09.693: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:09.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6774" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":157,"skipped":2494,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:09.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:54:09.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef" in namespace "projected-7527" to be "success or failure" Jun 22 21:54:09.910: INFO: Pod "downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683801ms Jun 22 21:54:11.913: INFO: Pod "downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006823784s Jun 22 21:54:13.918: INFO: Pod "downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011531287s STEP: Saw pod success Jun 22 21:54:13.918: INFO: Pod "downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef" satisfied condition "success or failure" Jun 22 21:54:13.921: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef container client-container: STEP: delete the pod Jun 22 21:54:13.968: INFO: Waiting for pod downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef to disappear Jun 22 21:54:13.975: INFO: Pod downwardapi-volume-d9f74147-6c90-4630-9449-342d338a28ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:13.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7527" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:13.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-97ts STEP: Creating a pod to test atomic-volume-subpath Jun 22 21:54:14.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-97ts" in namespace "subpath-4231" to be "success or failure" Jun 22 21:54:14.053: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517884ms Jun 22 21:54:16.058: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008254476s Jun 22 21:54:18.062: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 4.012670544s Jun 22 21:54:20.066: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 6.015849539s Jun 22 21:54:22.069: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 8.019092114s Jun 22 21:54:24.073: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 10.023259042s Jun 22 21:54:26.077: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 12.027162702s Jun 22 21:54:28.080: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 14.030123641s Jun 22 21:54:30.159: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.109156417s Jun 22 21:54:32.163: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 18.112891963s Jun 22 21:54:34.167: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 20.116982597s Jun 22 21:54:36.171: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 22.121429085s Jun 22 21:54:38.174: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Running", Reason="", readiness=true. Elapsed: 24.123853998s Jun 22 21:54:40.178: INFO: Pod "pod-subpath-test-configmap-97ts": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.128508814s STEP: Saw pod success Jun 22 21:54:40.178: INFO: Pod "pod-subpath-test-configmap-97ts" satisfied condition "success or failure" Jun 22 21:54:40.181: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-97ts container test-container-subpath-configmap-97ts: STEP: delete the pod Jun 22 21:54:40.223: INFO: Waiting for pod pod-subpath-test-configmap-97ts to disappear Jun 22 21:54:40.228: INFO: Pod pod-subpath-test-configmap-97ts no longer exists STEP: Deleting pod pod-subpath-test-configmap-97ts Jun 22 21:54:40.228: INFO: Deleting pod "pod-subpath-test-configmap-97ts" in namespace "subpath-4231" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:40.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4231" for this suite. • [SLOW TEST:26.256 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":159,"skipped":2531,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:40.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jun 22 21:54:40.332: INFO: Waiting up to 5m0s for pod "var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490" in namespace "var-expansion-5007" to be "success or failure" Jun 22 21:54:40.347: INFO: Pod "var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490": Phase="Pending", Reason="", readiness=false. Elapsed: 15.670488ms Jun 22 21:54:42.441: INFO: Pod "var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.109381735s Jun 22 21:54:44.446: INFO: Pod "var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113927157s STEP: Saw pod success Jun 22 21:54:44.446: INFO: Pod "var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490" satisfied condition "success or failure" Jun 22 21:54:44.449: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490 container dapi-container: STEP: delete the pod Jun 22 21:54:44.488: INFO: Waiting for pod var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490 to disappear Jun 22 21:54:44.491: INFO: Pod var-expansion-6c6270c8-7224-4bb0-b435-31b262dd8490 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:44.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5007" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:44.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 22 21:54:49.127: INFO: Successfully updated pod "labelsupdatee5d96bf6-d326-4a33-a7cc-1d6b666195c9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8124" for this suite. 
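The Downward API volume labels test above ("Successfully updated pod labelsupdate...") mounts the pod's labels into the container through a downwardAPI volume, relabels the pod, and waits for the kubelet to rewrite the mounted file. A minimal sketch with illustrative names; the refresh happens on the kubelet's sync period, so the second cat can lag the label change by tens of seconds:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        tier: one
    spec:
      containers:
      - name: client-container
        image: docker.io/library/httpd:2.4.38-alpine
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF
    kubectl exec labels-demo -- cat /etc/podinfo/labels   # tier="one"
    kubectl label pod labels-demo tier=two --overwrite
    kubectl exec labels-demo -- cat /etc/podinfo/labels   # eventually tier="two"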
• [SLOW TEST:6.663 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:51.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 22 21:54:51.268: INFO: Waiting up to 5m0s for pod "downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50" in namespace "downward-api-6552" to be "success or failure" Jun 22 21:54:51.274: INFO: Pod "downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50": Phase="Pending", Reason="", readiness=false. Elapsed: 5.968005ms Jun 22 21:54:53.278: INFO: Pod "downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01018302s Jun 22 21:54:55.281: INFO: Pod "downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013798305s STEP: Saw pod success Jun 22 21:54:55.282: INFO: Pod "downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50" satisfied condition "success or failure" Jun 22 21:54:55.285: INFO: Trying to get logs from node jerma-worker pod downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50 container dapi-container: STEP: delete the pod Jun 22 21:54:55.350: INFO: Waiting for pod downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50 to disappear Jun 22 21:54:55.358: INFO: Pod downward-api-d38b52f9-4bcf-4338-a708-1356eb951a50 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:54:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6552" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:54:55.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:55:06.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3292" for this suite. • [SLOW TEST:11.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":163,"skipped":2678,"failed":0} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:55:06.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jun 22 21:55:10.663: INFO: Pod pod-hostip-43d93452-fffc-49be-b514-8eae16152c0a has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:55:10.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9158" for this suite. 
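The host-IP check above only asserts that status.hostIP is populated once the pod is scheduled; the same field can be read back directly. The pods-9158 namespace is torn down in AfterEach, so the name below is only good while the pod exists:

    kubectl get pod pod-hostip-43d93452-fffc-49be-b514-8eae16152c0a \
        --namespace=pods-9158 -o jsonpath='{.status.hostIP}'
    # printed 172.17.0.10 in the run above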
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2678,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:55:10.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 22 21:55:10.767: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490356 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 21:55:10.768: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490356 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 22 21:55:20.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490404 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 22 21:55:20.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490404 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 22 21:55:30.796: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490436 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 21:55:30.796: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490436 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] 
[] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 22 21:55:40.803: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490468 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 21:55:40.803: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-a fc668baa-1f8b-487a-9bf6-cc0465e2a15c 26490468 0 2020-06-22 21:55:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 22 21:55:50.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-b 6c7f4f97-56cb-4ef9-88e3-669b01da055d 26490499 0 2020-06-22 21:55:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 21:55:50.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-b 6c7f4f97-56cb-4ef9-88e3-669b01da055d 26490499 0 2020-06-22 21:55:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 22 21:56:00.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-b 6c7f4f97-56cb-4ef9-88e3-669b01da055d 26490529 0 2020-06-22 21:55:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 21:56:00.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5612 /api/v1/namespaces/watch-5612/configmaps/e2e-watch-test-configmap-b 6c7f4f97-56cb-4ef9-88e3-669b01da055d 26490529 0 2020-06-22 21:55:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:10.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5612" for this suite. 
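The watch test that just completed registers three label-filtered watches (label A, label B, A-or-B) and asserts each sees exactly the ADDED/MODIFIED/DELETED events dumped above. The same event stream can be observed against the raw API; this sketch assumes a free local port for kubectl proxy, and the watch-5612 namespace only exists while the test runs, so substitute your own:

    kubectl proxy --port=8001 &
    curl -N "http://127.0.0.1:8001/api/v1/namespaces/watch-5612/configmaps?watch=true&labelSelector=watch-this-configmap%20in%20(multiple-watchers-A,multiple-watchers-B)"
    # each mutation arrives as one JSON line: {"type":"ADDED","object":{...}}, etc.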
• [SLOW TEST:60.174 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":165,"skipped":2679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:10.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 21:56:14.988: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:15.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8134" for this suite. 
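In the termination-message test above, the container succeeds while writing nothing, and because TerminationMessagePolicy is FallbackToLogsOnError the kubelet only falls back to logs on failure, so the message stays empty (hence the "Expected: &{} to match ... --" line). A minimal sketch with illustrative names; busybox stands in for whatever small image the suite actually uses:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: term
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "true"]   # exit 0, write nothing
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # once terminated, the message should be empty: logs are consulted only on failure
    kubectl get pod termination-demo \
        -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'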
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:15.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 21:56:15.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642" in namespace "downward-api-5560" to be "success or failure" Jun 22 21:56:15.167: INFO: Pod "downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642": Phase="Pending", Reason="", readiness=false. Elapsed: 15.764076ms Jun 22 21:56:17.208: INFO: Pod "downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056809711s Jun 22 21:56:19.213: INFO: Pod "downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061403689s STEP: Saw pod success Jun 22 21:56:19.213: INFO: Pod "downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642" satisfied condition "success or failure" Jun 22 21:56:19.238: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642 container client-container: STEP: delete the pod Jun 22 21:56:19.272: INFO: Waiting for pod downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642 to disappear Jun 22 21:56:19.281: INFO: Pod downwardapi-volume-106c766f-8b7c-437b-8bf1-bc2ee11de642 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:19.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5560" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2747,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:19.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-600 STEP: creating replication controller nodeport-test in namespace services-600 I0622 21:56:19.477004 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-600, replica count: 2 I0622 21:56:22.527606 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 21:56:25.527847 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 21:56:25.527: INFO: Creating new exec pod Jun 22 21:56:30.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-600 execpodtzrm5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 22 21:56:30.887: INFO: stderr: "I0622 21:56:30.731722 2000 log.go:172] (0xc000b12790) (0xc000a52000) Create stream\nI0622 21:56:30.731803 2000 log.go:172] (0xc000b12790) (0xc000a52000) Stream added, broadcasting: 1\nI0622 21:56:30.735342 2000 log.go:172] (0xc000b12790) Reply frame received for 1\nI0622 21:56:30.735389 2000 log.go:172] (0xc000b12790) (0xc0009e8000) Create stream\nI0622 21:56:30.735402 2000 log.go:172] (0xc000b12790) (0xc0009e8000) Stream added, broadcasting: 3\nI0622 21:56:30.736441 2000 log.go:172] (0xc000b12790) Reply frame received for 3\nI0622 21:56:30.736487 2000 log.go:172] (0xc000b12790) (0xc0006b3c20) Create stream\nI0622 21:56:30.736500 2000 log.go:172] (0xc000b12790) (0xc0006b3c20) Stream added, broadcasting: 5\nI0622 21:56:30.737485 2000 log.go:172] (0xc000b12790) Reply frame received for 5\nI0622 21:56:30.855640 2000 log.go:172] (0xc000b12790) Data frame received for 5\nI0622 21:56:30.855686 2000 log.go:172] (0xc0006b3c20) (5) Data frame handling\nI0622 21:56:30.855723 2000 log.go:172] (0xc0006b3c20) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0622 21:56:30.876442 2000 log.go:172] (0xc000b12790) Data frame received for 5\nI0622 21:56:30.877266 2000 log.go:172] (0xc0006b3c20) (5) Data frame handling\nI0622 21:56:30.877296 2000 log.go:172] (0xc0006b3c20) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0622 21:56:30.877320 2000 log.go:172] (0xc000b12790) Data frame received for 3\nI0622 21:56:30.877352 2000 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0622 21:56:30.877391 2000 log.go:172] 
(0xc000b12790) Data frame received for 5\nI0622 21:56:30.877413 2000 log.go:172] (0xc0006b3c20) (5) Data frame handling\nI0622 21:56:30.879152 2000 log.go:172] (0xc000b12790) Data frame received for 1\nI0622 21:56:30.879174 2000 log.go:172] (0xc000a52000) (1) Data frame handling\nI0622 21:56:30.879197 2000 log.go:172] (0xc000a52000) (1) Data frame sent\nI0622 21:56:30.879349 2000 log.go:172] (0xc000b12790) (0xc000a52000) Stream removed, broadcasting: 1\nI0622 21:56:30.879391 2000 log.go:172] (0xc000b12790) Go away received\nI0622 21:56:30.879735 2000 log.go:172] (0xc000b12790) (0xc000a52000) Stream removed, broadcasting: 1\nI0622 21:56:30.879751 2000 log.go:172] (0xc000b12790) (0xc0009e8000) Stream removed, broadcasting: 3\nI0622 21:56:30.879758 2000 log.go:172] (0xc000b12790) (0xc0006b3c20) Stream removed, broadcasting: 5\n" Jun 22 21:56:30.887: INFO: stdout: "" Jun 22 21:56:30.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-600 execpodtzrm5 -- /bin/sh -x -c nc -zv -t -w 2 10.103.23.75 80' Jun 22 21:56:31.092: INFO: stderr: "I0622 21:56:31.009347 2021 log.go:172] (0xc000af6840) (0xc000aee000) Create stream\nI0622 21:56:31.009406 2021 log.go:172] (0xc000af6840) (0xc000aee000) Stream added, broadcasting: 1\nI0622 21:56:31.011349 2021 log.go:172] (0xc000af6840) Reply frame received for 1\nI0622 21:56:31.011380 2021 log.go:172] (0xc000af6840) (0xc000be21e0) Create stream\nI0622 21:56:31.011389 2021 log.go:172] (0xc000af6840) (0xc000be21e0) Stream added, broadcasting: 3\nI0622 21:56:31.012336 2021 log.go:172] (0xc000af6840) Reply frame received for 3\nI0622 21:56:31.012383 2021 log.go:172] (0xc000af6840) (0xc000be2280) Create stream\nI0622 21:56:31.012399 2021 log.go:172] (0xc000af6840) (0xc000be2280) Stream added, broadcasting: 5\nI0622 21:56:31.013483 2021 log.go:172] (0xc000af6840) Reply frame received for 5\nI0622 21:56:31.083453 2021 log.go:172] (0xc000af6840) Data frame received for 5\nI0622 21:56:31.083515 2021 log.go:172] (0xc000be2280) (5) Data frame handling\nI0622 21:56:31.083545 2021 log.go:172] (0xc000be2280) (5) Data frame sent\nI0622 21:56:31.083561 2021 log.go:172] (0xc000af6840) Data frame received for 5\nI0622 21:56:31.083572 2021 log.go:172] (0xc000be2280) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.23.75 80\nConnection to 10.103.23.75 80 port [tcp/http] succeeded!\nI0622 21:56:31.083733 2021 log.go:172] (0xc000af6840) Data frame received for 3\nI0622 21:56:31.083759 2021 log.go:172] (0xc000be21e0) (3) Data frame handling\nI0622 21:56:31.084686 2021 log.go:172] (0xc000af6840) Data frame received for 1\nI0622 21:56:31.084712 2021 log.go:172] (0xc000aee000) (1) Data frame handling\nI0622 21:56:31.084735 2021 log.go:172] (0xc000aee000) (1) Data frame sent\nI0622 21:56:31.084833 2021 log.go:172] (0xc000af6840) (0xc000aee000) Stream removed, broadcasting: 1\nI0622 21:56:31.084973 2021 log.go:172] (0xc000af6840) Go away received\nI0622 21:56:31.085508 2021 log.go:172] (0xc000af6840) (0xc000aee000) Stream removed, broadcasting: 1\nI0622 21:56:31.085532 2021 log.go:172] (0xc000af6840) (0xc000be21e0) Stream removed, broadcasting: 3\nI0622 21:56:31.085545 2021 log.go:172] (0xc000af6840) (0xc000be2280) Stream removed, broadcasting: 5\n" Jun 22 21:56:31.092: INFO: stdout: "" Jun 22 21:56:31.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-600 execpodtzrm5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31131' Jun 22 21:56:31.306: INFO: stderr: "I0622 21:56:31.217474 2043 log.go:172] 
(0xc000a7a000) (0xc000a4e000) Create stream\nI0622 21:56:31.217528 2043 log.go:172] (0xc000a7a000) (0xc000a4e000) Stream added, broadcasting: 1\nI0622 21:56:31.220099 2043 log.go:172] (0xc000a7a000) Reply frame received for 1\nI0622 21:56:31.220123 2043 log.go:172] (0xc000a7a000) (0xc000a4e0a0) Create stream\nI0622 21:56:31.220129 2043 log.go:172] (0xc000a7a000) (0xc000a4e0a0) Stream added, broadcasting: 3\nI0622 21:56:31.220828 2043 log.go:172] (0xc000a7a000) Reply frame received for 3\nI0622 21:56:31.220865 2043 log.go:172] (0xc000a7a000) (0xc000a4e140) Create stream\nI0622 21:56:31.220876 2043 log.go:172] (0xc000a7a000) (0xc000a4e140) Stream added, broadcasting: 5\nI0622 21:56:31.222020 2043 log.go:172] (0xc000a7a000) Reply frame received for 5\nI0622 21:56:31.298055 2043 log.go:172] (0xc000a7a000) Data frame received for 3\nI0622 21:56:31.298091 2043 log.go:172] (0xc000a4e0a0) (3) Data frame handling\nI0622 21:56:31.298131 2043 log.go:172] (0xc000a7a000) Data frame received for 5\nI0622 21:56:31.298178 2043 log.go:172] (0xc000a4e140) (5) Data frame handling\nI0622 21:56:31.298208 2043 log.go:172] (0xc000a4e140) (5) Data frame sent\nI0622 21:56:31.298227 2043 log.go:172] (0xc000a7a000) Data frame received for 5\nI0622 21:56:31.298242 2043 log.go:172] (0xc000a4e140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31131\nConnection to 172.17.0.10 31131 port [tcp/31131] succeeded!\nI0622 21:56:31.300009 2043 log.go:172] (0xc000a7a000) Data frame received for 1\nI0622 21:56:31.300031 2043 log.go:172] (0xc000a4e000) (1) Data frame handling\nI0622 21:56:31.300043 2043 log.go:172] (0xc000a4e000) (1) Data frame sent\nI0622 21:56:31.300059 2043 log.go:172] (0xc000a7a000) (0xc000a4e000) Stream removed, broadcasting: 1\nI0622 21:56:31.300080 2043 log.go:172] (0xc000a7a000) Go away received\nI0622 21:56:31.300415 2043 log.go:172] (0xc000a7a000) (0xc000a4e000) Stream removed, broadcasting: 1\nI0622 21:56:31.300627 2043 log.go:172] (0xc000a7a000) (0xc000a4e0a0) Stream removed, broadcasting: 3\nI0622 21:56:31.300642 2043 log.go:172] (0xc000a7a000) (0xc000a4e140) Stream removed, broadcasting: 5\n" Jun 22 21:56:31.307: INFO: stdout: "" Jun 22 21:56:31.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-600 execpodtzrm5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31131' Jun 22 21:56:31.542: INFO: stderr: "I0622 21:56:31.449361 2064 log.go:172] (0xc0001091e0) (0xc000699e00) Create stream\nI0622 21:56:31.449413 2064 log.go:172] (0xc0001091e0) (0xc000699e00) Stream added, broadcasting: 1\nI0622 21:56:31.452063 2064 log.go:172] (0xc0001091e0) Reply frame received for 1\nI0622 21:56:31.452114 2064 log.go:172] (0xc0001091e0) (0xc0005f26e0) Create stream\nI0622 21:56:31.452128 2064 log.go:172] (0xc0001091e0) (0xc0005f26e0) Stream added, broadcasting: 3\nI0622 21:56:31.453482 2064 log.go:172] (0xc0001091e0) Reply frame received for 3\nI0622 21:56:31.453521 2064 log.go:172] (0xc0001091e0) (0xc0003d94a0) Create stream\nI0622 21:56:31.453532 2064 log.go:172] (0xc0001091e0) (0xc0003d94a0) Stream added, broadcasting: 5\nI0622 21:56:31.454547 2064 log.go:172] (0xc0001091e0) Reply frame received for 5\nI0622 21:56:31.535777 2064 log.go:172] (0xc0001091e0) Data frame received for 3\nI0622 21:56:31.535801 2064 log.go:172] (0xc0005f26e0) (3) Data frame handling\nI0622 21:56:31.535818 2064 log.go:172] (0xc0001091e0) Data frame received for 5\nI0622 21:56:31.535826 2064 log.go:172] (0xc0003d94a0) (5) Data frame handling\nI0622 21:56:31.535834 2064 log.go:172] 
(0xc0003d94a0) (5) Data frame sent\nI0622 21:56:31.535840 2064 log.go:172] (0xc0001091e0) Data frame received for 5\nI0622 21:56:31.535846 2064 log.go:172] (0xc0003d94a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31131\nConnection to 172.17.0.8 31131 port [tcp/31131] succeeded!\nI0622 21:56:31.537063 2064 log.go:172] (0xc0001091e0) Data frame received for 1\nI0622 21:56:31.537081 2064 log.go:172] (0xc000699e00) (1) Data frame handling\nI0622 21:56:31.537092 2064 log.go:172] (0xc000699e00) (1) Data frame sent\nI0622 21:56:31.537104 2064 log.go:172] (0xc0001091e0) (0xc000699e00) Stream removed, broadcasting: 1\nI0622 21:56:31.537247 2064 log.go:172] (0xc0001091e0) Go away received\nI0622 21:56:31.537640 2064 log.go:172] (0xc0001091e0) (0xc000699e00) Stream removed, broadcasting: 1\nI0622 21:56:31.537659 2064 log.go:172] (0xc0001091e0) (0xc0005f26e0) Stream removed, broadcasting: 3\nI0622 21:56:31.537668 2064 log.go:172] (0xc0001091e0) (0xc0003d94a0) Stream removed, broadcasting: 5\n" Jun 22 21:56:31.542: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:31.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-600" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.261 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":168,"skipped":2765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:31.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:56:31.639: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.878778ms)
Jun 22 21:56:31.643: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.644491ms)
Jun 22 21:56:31.646: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.61809ms)
Jun 22 21:56:31.651: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.084043ms)
Jun 22 21:56:31.654: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.688611ms)
Jun 22 21:56:31.658: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.473083ms)
Jun 22 21:56:31.697: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 39.077206ms)
Jun 22 21:56:31.701: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.337835ms)
Jun 22 21:56:31.705: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.883633ms)
Jun 22 21:56:31.709: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.259262ms)
Jun 22 21:56:31.712: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.954357ms)
Jun 22 21:56:31.715: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.801397ms)
Jun 22 21:56:31.718: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.171317ms)
Jun 22 21:56:31.721: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.933021ms)
Jun 22 21:56:31.724: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.789538ms)
Jun 22 21:56:31.726: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.775295ms)
Jun 22 21:56:31.747: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 20.778355ms)
Jun 22 21:56:31.751: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.44108ms)
Jun 22 21:56:31.754: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.257443ms)
Jun 22 21:56:31.757: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.944907ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4690" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":169,"skipped":2791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:31.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-sftt STEP: Creating a pod to test atomic-volume-subpath Jun 22 21:56:31.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sftt" in namespace "subpath-3542" to be "success or failure" Jun 22 21:56:31.832: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Pending", Reason="", readiness=false. Elapsed: 3.242249ms Jun 22 21:56:33.837: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008258009s Jun 22 21:56:35.841: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 4.01227353s Jun 22 21:56:37.845: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 6.015987462s Jun 22 21:56:39.945: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 8.115677803s Jun 22 21:56:41.949: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 10.120143047s Jun 22 21:56:43.954: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 12.124871663s Jun 22 21:56:45.958: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 14.128634179s Jun 22 21:56:47.962: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 16.132434177s Jun 22 21:56:49.964: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 18.135194757s Jun 22 21:56:51.968: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 20.138800591s Jun 22 21:56:53.972: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Running", Reason="", readiness=true. Elapsed: 22.143175141s Jun 22 21:56:55.976: INFO: Pod "pod-subpath-test-secret-sftt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.147038263s STEP: Saw pod success Jun 22 21:56:55.976: INFO: Pod "pod-subpath-test-secret-sftt" satisfied condition "success or failure" Jun 22 21:56:55.979: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-sftt container test-container-subpath-secret-sftt: STEP: delete the pod Jun 22 21:56:56.113: INFO: Waiting for pod pod-subpath-test-secret-sftt to disappear Jun 22 21:56:56.116: INFO: Pod pod-subpath-test-secret-sftt no longer exists STEP: Deleting pod pod-subpath-test-secret-sftt Jun 22 21:56:56.116: INFO: Deleting pod "pod-subpath-test-secret-sftt" in namespace "subpath-3542" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:56:56.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3542" for this suite. • [SLOW TEST:24.360 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":170,"skipped":2824,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:56:56.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0622 21:57:06.262378 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 22 21:57:06.262: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:57:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7544" for this suite. • [SLOW TEST:10.145 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":171,"skipped":2831,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:57:06.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:57:17.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-415" for this suite. • [SLOW TEST:11.144 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":172,"skipped":2849,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:57:17.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:57:50.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2871" for this suite. 
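The exit-status test that just passed cycles its terminate-cmd-rpa/rpof/rpn pods through restartPolicy Always, OnFailure, and Never with scripted exit codes, checking RestartCount, Phase, the Ready condition, and State after each change. The Never leg can be probed by hand; names are illustrative:

    kubectl run terminate-demo --restart=Never \
        --image=docker.io/library/busybox:1.29 -- /bin/sh -c 'exit 1'
    # after the container exits: phase Failed, terminated exitCode 1, no restarts
    kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].state.terminated.exitCode} {.status.containerStatuses[0].restartCount}'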
• [SLOW TEST:33.004 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2854,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:57:50.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 22 21:57:50.506: INFO: Waiting up to 5m0s for pod "downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b" in namespace "downward-api-3137" to be "success or failure" Jun 22 21:57:50.544: INFO: Pod "downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.295144ms Jun 22 21:57:52.575: INFO: Pod "downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068893715s Jun 22 21:57:54.579: INFO: Pod "downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073292488s STEP: Saw pod success Jun 22 21:57:54.579: INFO: Pod "downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b" satisfied condition "success or failure" Jun 22 21:57:54.583: INFO: Trying to get logs from node jerma-worker2 pod downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b container dapi-container: STEP: delete the pod Jun 22 21:57:54.613: INFO: Waiting for pod downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b to disappear Jun 22 21:57:54.628: INFO: Pod downward-api-909d56e3-2bd6-47da-99c4-3fa54bb3c97b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:57:54.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3137" for this suite. 
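The env vars checked above come from resourceFieldRef selectors in the pod spec. A minimal sketch, with pod name and resource values illustrative (the divisor defaults to 1, so a limits.cpu of 500m is rounded up to 1 whole core and memory is reported in bytes):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_REQUEST'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
kubectl logs dapi-demo    # expect CPU_LIMIT=1 and MEMORY_REQUEST=33554432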
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2866,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:57:54.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 21:57:55.140: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 21:57:57.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 21:57:59.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728459875, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 21:58:02.187: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:58:02.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3515-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:03.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9999" for this suite. STEP: Destroying namespace "webhook-9999-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.119 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":175,"skipped":2867,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:03.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jun 22 21:58:03.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6388 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 22 21:58:06.718: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0622 21:58:06.614709 2087 log.go:172] (0xc000b4cfd0) (0xc000627ae0) Create stream\nI0622 21:58:06.614766 2087 log.go:172] (0xc000b4cfd0) (0xc000627ae0) Stream added, broadcasting: 1\nI0622 21:58:06.617351 2087 log.go:172] (0xc000b4cfd0) Reply frame received for 1\nI0622 21:58:06.617402 2087 log.go:172] (0xc000b4cfd0) (0xc000aee0a0) Create stream\nI0622 21:58:06.617414 2087 log.go:172] (0xc000b4cfd0) (0xc000aee0a0) Stream added, broadcasting: 3\nI0622 21:58:06.618492 2087 log.go:172] (0xc000b4cfd0) Reply frame received for 3\nI0622 21:58:06.618548 2087 log.go:172] (0xc000b4cfd0) (0xc0007e2000) Create stream\nI0622 21:58:06.618562 2087 log.go:172] (0xc000b4cfd0) (0xc0007e2000) Stream added, broadcasting: 5\nI0622 21:58:06.619383 2087 log.go:172] (0xc000b4cfd0) Reply frame received for 5\nI0622 21:58:06.619415 2087 log.go:172] (0xc000b4cfd0) (0xc000aee140) Create stream\nI0622 21:58:06.619425 2087 log.go:172] (0xc000b4cfd0) (0xc000aee140) Stream added, broadcasting: 7\nI0622 21:58:06.620331 2087 log.go:172] (0xc000b4cfd0) Reply frame received for 7\nI0622 21:58:06.620437 2087 log.go:172] (0xc000aee0a0) (3) Writing data frame\nI0622 21:58:06.620535 2087 log.go:172] (0xc000aee0a0) (3) Writing data frame\nI0622 21:58:06.622030 2087 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0622 21:58:06.622056 2087 log.go:172] (0xc0007e2000) (5) Data frame handling\nI0622 21:58:06.622071 2087 log.go:172] (0xc0007e2000) (5) Data frame sent\nI0622 21:58:06.622434 2087 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0622 21:58:06.622455 2087 log.go:172] (0xc0007e2000) (5) Data frame handling\nI0622 21:58:06.622471 2087 log.go:172] (0xc0007e2000) (5) Data frame sent\nI0622 21:58:06.671249 2087 log.go:172] (0xc000b4cfd0) Data frame received for 7\nI0622 21:58:06.671452 2087 log.go:172] (0xc000aee140) (7) Data frame handling\nI0622 21:58:06.671535 2087 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0622 21:58:06.671573 2087 log.go:172] (0xc0007e2000) (5) Data frame handling\nI0622 21:58:06.671622 2087 log.go:172] (0xc000b4cfd0) (0xc000aee0a0) Stream removed, broadcasting: 3\nI0622 21:58:06.671691 2087 log.go:172] (0xc000b4cfd0) Data frame received for 1\nI0622 21:58:06.671719 2087 log.go:172] (0xc000627ae0) (1) Data frame handling\nI0622 21:58:06.671755 2087 log.go:172] (0xc000627ae0) (1) Data frame sent\nI0622 21:58:06.671785 2087 log.go:172] (0xc000b4cfd0) (0xc000627ae0) Stream removed, broadcasting: 1\nI0622 21:58:06.671840 2087 log.go:172] (0xc000b4cfd0) Go away received\nI0622 21:58:06.672462 2087 log.go:172] (0xc000b4cfd0) (0xc000627ae0) Stream removed, broadcasting: 1\nI0622 21:58:06.672498 2087 log.go:172] (0xc000b4cfd0) (0xc000aee0a0) Stream removed, broadcasting: 3\nI0622 21:58:06.672517 2087 log.go:172] (0xc000b4cfd0) (0xc0007e2000) Stream removed, broadcasting: 5\nI0622 21:58:06.672547 2087 log.go:172] (0xc000b4cfd0) (0xc000aee140) Stream removed, broadcasting: 7\n" Jun 22 21:58:06.718: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:08.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6388" for this suite. 
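The deprecation warning in the stderr above points at the replacement; a roughly equivalent one-off invocation without the job generator (pod name illustrative) is:
echo abcd1234 | kubectl run rm-demo --image=docker.io/library/busybox:1.29 \
  --rm --restart=Never --attach --stdin -- sh -c 'cat && echo "stdin closed"'
With --restart=Never this creates a bare pod rather than a Job; --rm tears it down once the attached session ends, which is the same cleanup the verification step above asserts for the Job variant.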
• [SLOW TEST:5.023 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":176,"skipped":2874,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:08.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:08.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6194" for this suite. 
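The discovery walk above is three anonymous-path GETs against the apiserver, which kubectl can issue directly:
kubectl get --raw /apis | grep -o '"name":"apiextensions.k8s.io"'
kubectl get --raw /apis/apiextensions.k8s.io          # group document listing v1 and v1beta1
kubectl get --raw /apis/apiextensions.k8s.io/v1 | grep -o '"name":"customresourcedefinitions"'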
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":177,"skipped":2879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:08.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3779 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3779 STEP: Creating statefulset with conflicting port in namespace statefulset-3779 STEP: Waiting until pod test-pod will start running in namespace statefulset-3779 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3779 Jun 22 21:58:15.026: INFO: Observed stateful pod in namespace: statefulset-3779, name: ss-0, uid: 00f09341-b177-461f-b722-10dd186b10b0, status phase: Failed. Waiting for statefulset controller to delete. Jun 22 21:58:15.039: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3779 STEP: Removing pod with conflicting port in namespace statefulset-3779 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3779 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 22 21:58:21.117: INFO: Deleting all statefulset in ns statefulset-3779 Jun 22 21:58:21.120: INFO: Scaling statefulset ss to 0 Jun 22 21:58:31.139: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 21:58:31.142: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:31.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3779" for this suite. 
• [SLOW TEST:22.304 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":178,"skipped":2910,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:31.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-57916f0a-d1cc-4836-a0ed-1f0660d431d3 STEP: Creating a pod to test consume secrets Jun 22 21:58:31.247: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da" in namespace "projected-2482" to be "success or failure" Jun 22 21:58:31.267: INFO: Pod "pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da": Phase="Pending", Reason="", readiness=false. Elapsed: 19.862202ms Jun 22 21:58:33.270: INFO: Pod "pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023264883s Jun 22 21:58:35.274: INFO: Pod "pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027185993s STEP: Saw pod success Jun 22 21:58:35.274: INFO: Pod "pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da" satisfied condition "success or failure" Jun 22 21:58:35.277: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da container secret-volume-test: STEP: delete the pod Jun 22 21:58:35.312: INFO: Waiting for pod pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da to disappear Jun 22 21:58:35.316: INFO: Pod pod-projected-secrets-fa855e1c-1f57-4366-a6a4-b721e43ed8da no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:35.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2482" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2918,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:35.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-ce936ca6-d58d-435b-a970-4a3c952d1604 STEP: Creating a pod to test consume secrets Jun 22 21:58:35.445: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60" in namespace "projected-1133" to be "success or failure" Jun 22 21:58:35.467: INFO: Pod "pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60": Phase="Pending", Reason="", readiness=false. Elapsed: 21.435482ms Jun 22 21:58:37.470: INFO: Pod "pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024834321s Jun 22 21:58:39.475: INFO: Pod "pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029140028s STEP: Saw pod success Jun 22 21:58:39.475: INFO: Pod "pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60" satisfied condition "success or failure" Jun 22 21:58:39.478: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60 container projected-secret-volume-test: STEP: delete the pod Jun 22 21:58:39.496: INFO: Waiting for pod pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60 to disappear Jun 22 21:58:39.514: INFO: Pod pod-projected-secrets-d79d055b-239c-4687-8d5f-31340a6f8b60 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:58:39.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1133" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2918,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:58:39.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:58:39.649: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 22 21:58:39.681: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:39.685: INFO: Number of nodes with available pods: 0 Jun 22 21:58:39.685: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:40.690: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:40.694: INFO: Number of nodes with available pods: 0 Jun 22 21:58:40.694: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:41.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:41.717: INFO: Number of nodes with available pods: 0 Jun 22 21:58:41.717: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:42.708: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:42.712: INFO: Number of nodes with available pods: 0 Jun 22 21:58:42.712: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:43.703: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:43.706: INFO: Number of nodes with available pods: 2 Jun 22 21:58:43.706: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 22 21:58:43.745: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:43.745: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 22 21:58:43.764: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:44.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:44.768: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:44.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:45.769: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:45.769: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:45.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:46.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:46.768: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:46.768: INFO: Pod daemon-set-kfkxl is not available Jun 22 21:58:46.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:47.769: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:47.769: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:47.769: INFO: Pod daemon-set-kfkxl is not available Jun 22 21:58:47.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:48.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:48.768: INFO: Wrong image for pod: daemon-set-kfkxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:48.768: INFO: Pod daemon-set-kfkxl is not available Jun 22 21:58:48.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:49.769: INFO: Pod daemon-set-6j6f7 is not available Jun 22 21:58:49.769: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 22 21:58:49.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:50.768: INFO: Pod daemon-set-6j6f7 is not available Jun 22 21:58:50.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:50.779: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:51.768: INFO: Pod daemon-set-6j6f7 is not available Jun 22 21:58:51.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:51.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:52.769: INFO: Pod daemon-set-6j6f7 is not available Jun 22 21:58:52.769: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:52.773: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:53.768: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:53.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:54.773: INFO: Wrong image for pod: daemon-set-f478t. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 22 21:58:54.773: INFO: Pod daemon-set-f478t is not available Jun 22 21:58:54.777: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:55.834: INFO: Pod daemon-set-v844m is not available Jun 22 21:58:55.840: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 22 21:58:55.844: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:55.851: INFO: Number of nodes with available pods: 1 Jun 22 21:58:55.851: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:56.856: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:56.859: INFO: Number of nodes with available pods: 1 Jun 22 21:58:56.859: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:57.855: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:57.857: INFO: Number of nodes with available pods: 1 Jun 22 21:58:57.857: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:58:58.856: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:58:58.860: INFO: Number of nodes with available pods: 2 Jun 22 21:58:58.860: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6994, will wait for the garbage collector to delete the pods Jun 22 21:58:58.958: INFO: Deleting DaemonSet.extensions daemon-set took: 7.013898ms Jun 22 21:58:59.360: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.214256ms Jun 22 21:59:09.263: INFO: Number of nodes with available pods: 0 Jun 22 21:59:09.263: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 21:59:09.265: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6994/daemonsets","resourceVersion":"26491795"},"items":null} Jun 22 21:59:09.267: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6994/pods","resourceVersion":"26491795"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:09.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6994" for this suite. 
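The image flip above is an ordinary RollingUpdate rollout, and the default maxUnavailable of 1 is what produces the one-pod-at-a-time churn in the poll loop. A minimal sketch (DaemonSet name illustrative, images copied from the log):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
spec:
  selector:
    matchLabels:
      app: ds-demo
  updateStrategy:
    type: RollingUpdate             # the default for apps/v1 DaemonSets
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Flip the image and watch the node-by-node replacement the test asserts on:
kubectl set image daemonset/ds-demo app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/ds-demo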
• [SLOW TEST:29.762 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":181,"skipped":2927,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:09.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jun 22 21:59:09.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 22 21:59:09.594: INFO: stderr: "" Jun 22 21:59:09.594: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:09.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8672" for this suite. 
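The whole assertion above is one line of shell:
kubectl api-versions | grep -x v1    # -x: the core group/version must appear as an exact line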
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":182,"skipped":2932,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:09.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 22 21:59:09.703: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:09.710: INFO: Number of nodes with available pods: 0 Jun 22 21:59:09.710: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:10.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:10.784: INFO: Number of nodes with available pods: 0 Jun 22 21:59:10.784: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:11.716: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:11.720: INFO: Number of nodes with available pods: 0 Jun 22 21:59:11.720: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:12.781: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:12.784: INFO: Number of nodes with available pods: 0 Jun 22 21:59:12.784: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:13.716: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:13.720: INFO: Number of nodes with available pods: 1 Jun 22 21:59:13.720: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:14.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:14.719: INFO: Number of nodes with available pods: 2 Jun 22 21:59:14.719: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 22 21:59:14.744: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:14.746: INFO: Number of nodes with available pods: 1 Jun 22 21:59:14.746: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:15.765: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:15.767: INFO: Number of nodes with available pods: 1 Jun 22 21:59:15.767: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:16.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:16.779: INFO: Number of nodes with available pods: 1 Jun 22 21:59:16.779: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:17.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:17.755: INFO: Number of nodes with available pods: 1 Jun 22 21:59:17.755: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:18.751: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:18.754: INFO: Number of nodes with available pods: 1 Jun 22 21:59:18.754: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:19.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:19.755: INFO: Number of nodes with available pods: 1 Jun 22 21:59:19.755: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:20.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:20.755: INFO: Number of nodes with available pods: 1 Jun 22 21:59:20.755: INFO: Node jerma-worker is running more than one daemon pod Jun 22 21:59:21.751: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 21:59:21.755: INFO: Number of nodes with available pods: 2 Jun 22 21:59:21.755: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3966, will wait for the garbage collector to delete the pods Jun 22 21:59:21.815: INFO: Deleting DaemonSet.extensions daemon-set took: 5.998158ms Jun 22 21:59:21.916: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.445322ms Jun 22 21:59:29.318: INFO: Number of nodes with available pods: 0 Jun 22 21:59:29.318: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 21:59:29.321: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3966/daemonsets","resourceVersion":"26491951"},"items":null} Jun 22 21:59:29.323: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3966/pods","resourceVersion":"26491951"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:29.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3966" for this suite. • [SLOW TEST:19.737 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":183,"skipped":2936,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:29.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 21:59:29.411: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c7de9890-0693-407f-b3e7-779e40c35ef6" in namespace "security-context-test-5602" to be "success or failure" Jun 22 21:59:29.417: INFO: Pod "alpine-nnp-false-c7de9890-0693-407f-b3e7-779e40c35ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596689ms Jun 22 21:59:31.421: INFO: Pod "alpine-nnp-false-c7de9890-0693-407f-b3e7-779e40c35ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010194652s Jun 22 21:59:33.433: INFO: Pod "alpine-nnp-false-c7de9890-0693-407f-b3e7-779e40c35ef6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022293204s Jun 22 21:59:33.433: INFO: Pod "alpine-nnp-false-c7de9890-0693-407f-b3e7-779e40c35ef6" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:33.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5602" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:33.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 22 21:59:33.536: INFO: Waiting up to 5m0s for pod "downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846" in namespace "downward-api-9981" to be "success or failure" Jun 22 21:59:33.538: INFO: Pod "downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261734ms Jun 22 21:59:35.543: INFO: Pod "downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006884077s Jun 22 21:59:37.547: INFO: Pod "downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01058219s STEP: Saw pod success Jun 22 21:59:37.547: INFO: Pod "downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846" satisfied condition "success or failure" Jun 22 21:59:37.550: INFO: Trying to get logs from node jerma-worker pod downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846 container dapi-container: STEP: delete the pod Jun 22 21:59:37.577: INFO: Waiting for pod downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846 to disappear Jun 22 21:59:37.606: INFO: Pod downward-api-484292b4-a28e-4c4d-a1a2-49963ec5a846 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:37.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9981" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2967,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:37.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9027.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9027.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 21:59:43.720: INFO: DNS probes using dns-9027/dns-test-d6f6e509-e8dc-4f3f-b2e9-4985d4722327 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 21:59:43.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9027" for this suite. 
• [SLOW TEST:6.161 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":186,"skipped":2979,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 21:59:43.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-1c8eb4f1-9f24-4fde-af91-bb9ce81defd0 in namespace container-probe-6023 Jun 22 21:59:47.928: INFO: Started pod busybox-1c8eb4f1-9f24-4fde-af91-bb9ce81defd0 in namespace container-probe-6023 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 21:59:47.931: INFO: Initial restart count of pod busybox-1c8eb4f1-9f24-4fde-af91-bb9ce81defd0 is 0 Jun 22 22:00:38.071: INFO: Restart count of pod container-probe-6023/busybox-1c8eb4f1-9f24-4fde-af91-bb9ce81defd0 is now 1 (50.139966491s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:00:38.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6023" for this suite.
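The 0 -> 1 restartCount transition above is driven by an exec probe over a file the container itself deletes; a minimal sketch with illustrative timings:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# Once /tmp/health is gone the probe fails failureThreshold times in a row
# and the kubelet restarts the container:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'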
• [SLOW TEST:54.355 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2995,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:00:38.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 22 22:00:38.198: INFO: Waiting up to 5m0s for pod "pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045" in namespace "emptydir-4413" to be "success or failure" Jun 22 22:00:38.201: INFO: Pod "pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773619ms Jun 22 22:00:40.226: INFO: Pod "pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028652402s Jun 22 22:00:42.231: INFO: Pod "pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033196681s STEP: Saw pod success Jun 22 22:00:42.231: INFO: Pod "pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045" satisfied condition "success or failure" Jun 22 22:00:42.234: INFO: Trying to get logs from node jerma-worker2 pod pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045 container test-container: STEP: delete the pod Jun 22 22:00:42.291: INFO: Waiting for pod pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045 to disappear Jun 22 22:00:42.299: INFO: Pod pod-c5dac1e5-3ad7-47d9-8df6-7ccb8ee24045 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:00:42.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4413" for this suite. 
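The (non-root,0777,tmpfs) case mounts a memory-backed emptyDir into a pod running as a non-root user and checks the volume's mode and filesystem type. A rough hand-run equivalent, with illustrative name and UID (the real spec uses the e2e mounttest image rather than busybox):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo           # illustrative
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                   # non-root, matching the (non-root,...) case
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /mnt/test; grep /mnt/test /proc/mounts"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                  # tmpfs-backed emptyDir
  EOF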
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2997,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:00:42.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d1f5b215-76b3-43a6-9760-14f5e1387344 STEP: Creating a pod to test consume secrets Jun 22 22:00:42.463: INFO: Waiting up to 5m0s for pod "pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c" in namespace "secrets-1574" to be "success or failure" Jun 22 22:00:42.473: INFO: Pod "pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.973313ms Jun 22 22:00:44.477: INFO: Pod "pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013974891s Jun 22 22:00:46.480: INFO: Pod "pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017535983s STEP: Saw pod success Jun 22 22:00:46.481: INFO: Pod "pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c" satisfied condition "success or failure" Jun 22 22:00:46.483: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c container secret-volume-test: STEP: delete the pod Jun 22 22:00:46.516: INFO: Waiting for pod pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c to disappear Jun 22 22:00:46.559: INFO: Pod pod-secrets-a0770c4d-d4ce-4ca6-b183-94497b9a981c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:00:46.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1574" for this suite. STEP: Destroying namespace "secret-namespace-5996" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:00:46.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 22:00:51.227: INFO: Successfully updated pod "pod-update-95ff8fc1-ed93-4d0d-a194-5e7c88ea6d74" STEP: verifying the updated pod is in kubernetes Jun 22 22:00:51.255: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:00:51.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2042" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:00:51.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4786 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4786 I0622 22:00:51.432428 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4786, replica count: 2 I0622 22:00:54.482934 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 22:00:57.483153 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 22:00:57.483: INFO: Creating new exec pod Jun 22 22:01:02.526: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4786 execpodl65xt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 22 22:01:02.751: INFO: stderr: "I0622 22:01:02.648534 2131 log.go:172] (0xc000a9c0b0) (0xc00091e0a0) Create stream\nI0622 22:01:02.648589 2131 log.go:172] (0xc000a9c0b0) (0xc00091e0a0) Stream added, broadcasting: 1\nI0622 22:01:02.650786 2131 log.go:172] (0xc000a9c0b0) Reply frame received for 1\nI0622 22:01:02.650829 2131 log.go:172] (0xc000a9c0b0) (0xc000a88000) Create stream\nI0622 22:01:02.650851 2131 log.go:172] (0xc000a9c0b0) (0xc000a88000) Stream added, broadcasting: 3\nI0622 22:01:02.651606 2131 log.go:172] (0xc000a9c0b0) Reply frame received for 3\nI0622 22:01:02.651649 2131 log.go:172] (0xc000a9c0b0) (0xc000a880a0) Create stream\nI0622 22:01:02.651661 2131 log.go:172] (0xc000a9c0b0) (0xc000a880a0) Stream added, broadcasting: 5\nI0622 22:01:02.652443 2131 log.go:172] (0xc000a9c0b0) Reply frame received for 5\nI0622 22:01:02.742511 2131 log.go:172] (0xc000a9c0b0) Data frame received for 5\nI0622 22:01:02.742549 2131 log.go:172] (0xc000a880a0) (5) Data frame handling\nI0622 22:01:02.742572 2131 log.go:172] (0xc000a880a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0622 22:01:02.742628 2131 log.go:172] (0xc000a9c0b0) Data frame received for 5\nI0622 22:01:02.742643 2131 log.go:172] (0xc000a880a0) (5) Data frame handling\nI0622 22:01:02.742650 2131 log.go:172] (0xc000a880a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0622 22:01:02.743181 2131 log.go:172] (0xc000a9c0b0) Data frame received for 5\nI0622 22:01:02.743210 2131 log.go:172] (0xc000a880a0) (5) Data frame handling\nI0622 22:01:02.743233 2131 log.go:172] (0xc000a9c0b0) Data frame received for 3\nI0622 22:01:02.743243 2131 log.go:172] (0xc000a88000) (3) Data frame handling\nI0622 22:01:02.744829 2131 log.go:172] (0xc000a9c0b0) Data frame received for 1\nI0622 22:01:02.744853 2131 log.go:172] (0xc00091e0a0) (1) Data frame handling\nI0622 22:01:02.744874 2131 log.go:172] (0xc00091e0a0) (1) Data frame sent\nI0622 22:01:02.744891 2131 log.go:172] (0xc000a9c0b0) (0xc00091e0a0) Stream removed, broadcasting: 1\nI0622 22:01:02.745030 2131 log.go:172] (0xc000a9c0b0) Go away received\nI0622 22:01:02.745320 2131 log.go:172] (0xc000a9c0b0) (0xc00091e0a0) Stream removed, broadcasting: 1\nI0622 22:01:02.745339 2131 log.go:172] (0xc000a9c0b0) (0xc000a88000) Stream removed, broadcasting: 3\nI0622 22:01:02.745349 2131 log.go:172] (0xc000a9c0b0) (0xc000a880a0) Stream removed, broadcasting: 5\n" Jun 22 22:01:02.751: INFO: stdout: "" Jun 22 22:01:02.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4786 execpodl65xt -- /bin/sh -x -c nc -zv -t -w 2 10.103.21.59 80' Jun 22 22:01:02.967: INFO: stderr: "I0622 22:01:02.876963 2152 log.go:172] (0xc000970f20) (0xc000671e00) Create stream\nI0622 22:01:02.877016 2152 log.go:172] (0xc000970f20) (0xc000671e00) Stream added, broadcasting: 1\nI0622 22:01:02.882383 2152 log.go:172] (0xc000970f20) Reply frame received for 1\nI0622 22:01:02.882426 2152 log.go:172] (0xc000970f20) (0xc000746aa0) Create stream\nI0622 22:01:02.882442 2152 log.go:172] (0xc000970f20) (0xc000746aa0) Stream added, broadcasting: 3\nI0622 22:01:02.883509 2152 log.go:172] (0xc000970f20) Reply frame received for 3\nI0622 22:01:02.883531 2152 log.go:172] (0xc000970f20) (0xc000671b80) Create stream\nI0622 22:01:02.883539 2152 log.go:172] (0xc000970f20) 
(0xc000671b80) Stream added, broadcasting: 5\nI0622 22:01:02.884495 2152 log.go:172] (0xc000970f20) Reply frame received for 5\nI0622 22:01:02.961325 2152 log.go:172] (0xc000970f20) Data frame received for 5\nI0622 22:01:02.961345 2152 log.go:172] (0xc000671b80) (5) Data frame handling\nI0622 22:01:02.961354 2152 log.go:172] (0xc000671b80) (5) Data frame sent\nI0622 22:01:02.961359 2152 log.go:172] (0xc000970f20) Data frame received for 5\nI0622 22:01:02.961364 2152 log.go:172] (0xc000671b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.21.59 80\nConnection to 10.103.21.59 80 port [tcp/http] succeeded!\nI0622 22:01:02.961385 2152 log.go:172] (0xc000970f20) Data frame received for 3\nI0622 22:01:02.961389 2152 log.go:172] (0xc000746aa0) (3) Data frame handling\nI0622 22:01:02.962740 2152 log.go:172] (0xc000970f20) Data frame received for 1\nI0622 22:01:02.962753 2152 log.go:172] (0xc000671e00) (1) Data frame handling\nI0622 22:01:02.962760 2152 log.go:172] (0xc000671e00) (1) Data frame sent\nI0622 22:01:02.962769 2152 log.go:172] (0xc000970f20) (0xc000671e00) Stream removed, broadcasting: 1\nI0622 22:01:02.962778 2152 log.go:172] (0xc000970f20) Go away received\nI0622 22:01:02.963129 2152 log.go:172] (0xc000970f20) (0xc000671e00) Stream removed, broadcasting: 1\nI0622 22:01:02.963157 2152 log.go:172] (0xc000970f20) (0xc000746aa0) Stream removed, broadcasting: 3\nI0622 22:01:02.963173 2152 log.go:172] (0xc000970f20) (0xc000671b80) Stream removed, broadcasting: 5\n" Jun 22 22:01:02.967: INFO: stdout: "" Jun 22 22:01:02.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4786 execpodl65xt -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30493' Jun 22 22:01:03.199: INFO: stderr: "I0622 22:01:03.118746 2174 log.go:172] (0xc000ad6000) (0xc00058fc20) Create stream\nI0622 22:01:03.118825 2174 log.go:172] (0xc000ad6000) (0xc00058fc20) Stream added, broadcasting: 1\nI0622 22:01:03.122298 2174 log.go:172] (0xc000ad6000) Reply frame received for 1\nI0622 22:01:03.122349 2174 log.go:172] (0xc000ad6000) (0xc000a94000) Create stream\nI0622 22:01:03.122369 2174 log.go:172] (0xc000ad6000) (0xc000a94000) Stream added, broadcasting: 3\nI0622 22:01:03.123281 2174 log.go:172] (0xc000ad6000) Reply frame received for 3\nI0622 22:01:03.123324 2174 log.go:172] (0xc000ad6000) (0xc00058fcc0) Create stream\nI0622 22:01:03.123337 2174 log.go:172] (0xc000ad6000) (0xc00058fcc0) Stream added, broadcasting: 5\nI0622 22:01:03.124237 2174 log.go:172] (0xc000ad6000) Reply frame received for 5\nI0622 22:01:03.189416 2174 log.go:172] (0xc000ad6000) Data frame received for 5\nI0622 22:01:03.189454 2174 log.go:172] (0xc00058fcc0) (5) Data frame handling\nI0622 22:01:03.189493 2174 log.go:172] (0xc00058fcc0) (5) Data frame sent\nI0622 22:01:03.189511 2174 log.go:172] (0xc000ad6000) Data frame received for 3\n+ nc -zv -t -w 2 172.17.0.10 30493\nConnection to 172.17.0.10 30493 port [tcp/30493] succeeded!\nI0622 22:01:03.189540 2174 log.go:172] (0xc000a94000) (3) Data frame handling\nI0622 22:01:03.189557 2174 log.go:172] (0xc000ad6000) Data frame received for 5\nI0622 22:01:03.189575 2174 log.go:172] (0xc00058fcc0) (5) Data frame handling\nI0622 22:01:03.191300 2174 log.go:172] (0xc000ad6000) Data frame received for 1\nI0622 22:01:03.191337 2174 log.go:172] (0xc00058fc20) (1) Data frame handling\nI0622 22:01:03.191368 2174 log.go:172] (0xc00058fc20) (1) Data frame sent\nI0622 22:01:03.191395 2174 log.go:172] (0xc000ad6000) (0xc00058fc20) Stream removed, broadcasting: 1\nI0622 
22:01:03.191424 2174 log.go:172] (0xc000ad6000) Go away received\nI0622 22:01:03.192029 2174 log.go:172] (0xc000ad6000) (0xc00058fc20) Stream removed, broadcasting: 1\nI0622 22:01:03.192058 2174 log.go:172] (0xc000ad6000) (0xc000a94000) Stream removed, broadcasting: 3\nI0622 22:01:03.192081 2174 log.go:172] (0xc000ad6000) (0xc00058fcc0) Stream removed, broadcasting: 5\n" Jun 22 22:01:03.199: INFO: stdout: "" Jun 22 22:01:03.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4786 execpodl65xt -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30493' Jun 22 22:01:03.428: INFO: stderr: "I0622 22:01:03.328869 2195 log.go:172] (0xc0000f42c0) (0xc0003f7540) Create stream\nI0622 22:01:03.328948 2195 log.go:172] (0xc0000f42c0) (0xc0003f7540) Stream added, broadcasting: 1\nI0622 22:01:03.331097 2195 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0622 22:01:03.331143 2195 log.go:172] (0xc0000f42c0) (0xc000811ae0) Create stream\nI0622 22:01:03.331160 2195 log.go:172] (0xc0000f42c0) (0xc000811ae0) Stream added, broadcasting: 3\nI0622 22:01:03.332230 2195 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0622 22:01:03.332280 2195 log.go:172] (0xc0000f42c0) (0xc000910000) Create stream\nI0622 22:01:03.332296 2195 log.go:172] (0xc0000f42c0) (0xc000910000) Stream added, broadcasting: 5\nI0622 22:01:03.333727 2195 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0622 22:01:03.419868 2195 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0622 22:01:03.419920 2195 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0622 22:01:03.419952 2195 log.go:172] (0xc000811ae0) (3) Data frame handling\nI0622 22:01:03.419977 2195 log.go:172] (0xc000910000) (5) Data frame handling\nI0622 22:01:03.419990 2195 log.go:172] (0xc000910000) (5) Data frame sent\nI0622 22:01:03.420001 2195 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0622 22:01:03.420016 2195 log.go:172] (0xc000910000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30493\nConnection to 172.17.0.8 30493 port [tcp/30493] succeeded!\nI0622 22:01:03.421905 2195 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0622 22:01:03.421946 2195 log.go:172] (0xc0003f7540) (1) Data frame handling\nI0622 22:01:03.421974 2195 log.go:172] (0xc0003f7540) (1) Data frame sent\nI0622 22:01:03.422000 2195 log.go:172] (0xc0000f42c0) (0xc0003f7540) Stream removed, broadcasting: 1\nI0622 22:01:03.422185 2195 log.go:172] (0xc0000f42c0) Go away received\nI0622 22:01:03.422539 2195 log.go:172] (0xc0000f42c0) (0xc0003f7540) Stream removed, broadcasting: 1\nI0622 22:01:03.422580 2195 log.go:172] (0xc0000f42c0) (0xc000811ae0) Stream removed, broadcasting: 3\nI0622 22:01:03.422606 2195 log.go:172] (0xc0000f42c0) (0xc000910000) Stream removed, broadcasting: 5\n" Jun 22 22:01:03.428: INFO: stdout: "" Jun 22 22:01:03.428: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:03.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4786" for this suite. 
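The type change above is an in-place service update from ExternalName to NodePort, after which the test's exec pod nc-checks the service name, the allocated ClusterIP, and each node's NodePort. A rough kubectl equivalent; the names are illustrative and the exact patch may need adjusting by version (externalName must be cleared for the NodePort type to validate):

  kubectl create service externalname demo-extname --external-name=example.com
  kubectl patch service demo-extname -p \
    '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80}]}}'
  # From any pod with nc, repeat the test's reachability checks:
  #   nc -zv -t -w 2 demo-extname 80          # service name
  #   nc -zv -t -w 2 <cluster-ip> 80          # ClusterIP
  #   nc -zv -t -w 2 <node-ip> <node-port>    # NodePort on each node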
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.218 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":191,"skipped":3071,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:03.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:01:03.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:01:05.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460064, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460063, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:01:09.022: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 22 22:01:13.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1752 to-be-attached-pod -i -c=container1' Jun 22 22:01:13.209: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:13.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1752" for this suite. STEP: Destroying namespace "webhook-1752-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.830 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":192,"skipped":3085,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:13.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:01:13.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe" in namespace "downward-api-2862" to be "success or failure" Jun 22 22:01:13.387: INFO: Pod "downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.254591ms Jun 22 22:01:15.391: INFO: Pod "downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007198679s Jun 22 22:01:17.395: INFO: Pod "downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010657407s STEP: Saw pod success Jun 22 22:01:17.395: INFO: Pod "downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe" satisfied condition "success or failure" Jun 22 22:01:17.397: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe container client-container: STEP: delete the pod Jun 22 22:01:17.441: INFO: Waiting for pod downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe to disappear Jun 22 22:01:17.466: INFO: Pod downwardapi-volume-87dc6d08-e50f-4ed2-b268-d88f251e13fe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2862" for this suite. 
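The podname check above projects metadata.name into a file through a downwardAPI volume and reads it back from the container's log. A minimal manifest sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/podname"]   # prints the pod's own name
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF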
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:17.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 22 22:01:25.170: INFO: 0 pods remaining Jun 22 22:01:25.171: INFO: 0 pods has nil DeletionTimestamp Jun 22 22:01:25.171: INFO: Jun 22 22:01:26.148: INFO: 0 pods remaining Jun 22 22:01:26.148: INFO: 0 pods has nil DeletionTimestamp Jun 22 22:01:26.148: INFO: STEP: Gathering metrics W0622 22:01:26.911187 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 22:01:26.911: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1914" for this suite. 
• [SLOW TEST:9.445 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":194,"skipped":3126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:26.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:01:27.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d" in namespace "downward-api-3496" to be "success or failure" Jun 22 22:01:27.834: INFO: Pod "downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 227.681459ms Jun 22 22:01:30.285: INFO: Pod "downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.67902271s Jun 22 22:01:32.315: INFO: Pod "downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.708829459s STEP: Saw pod success Jun 22 22:01:32.315: INFO: Pod "downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d" satisfied condition "success or failure" Jun 22 22:01:32.325: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d container client-container: STEP: delete the pod Jun 22 22:01:32.619: INFO: Waiting for pod downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d to disappear Jun 22 22:01:32.667: INFO: Pod downwardapi-volume-91c5b1c6-f1ba-4862-8484-f83d1d785e7d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:32.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3496" for this suite. 
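Unlike the podname case, this spec uses a resourceFieldRef with no limit set on the container, so the projected value falls back to the node's allocatable memory. A manifest sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # No resources.limits here, so the projected value is the node's
      # allocatable memory, which is what the spec verifies.
      command: ["cat", "/etc/podinfo/mem_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF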
• [SLOW TEST:5.768 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3151,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:32.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:45.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7142" for this suite. • [SLOW TEST:13.262 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":196,"skipped":3153,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:45.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 22 22:01:46.023: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:01:52.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5316" for this suite. • [SLOW TEST:6.249 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":197,"skipped":3156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:01:52.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:08.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2686" for this suite. • [SLOW TEST:16.486 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":198,"skipped":3197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:08.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 22 22:02:12.802: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5920 PodName:pod-sharedvolume-c82c9977-361b-4440-84b8-1b4d591d5102 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:02:12.802: INFO: >>> kubeConfig: /root/.kube/config I0622 22:02:12.834080 6 log.go:172] (0xc002490e70) (0xc000cf1680) Create stream I0622 22:02:12.834140 6 log.go:172] (0xc002490e70) (0xc000cf1680) Stream added, broadcasting: 1 I0622 22:02:12.836569 6 log.go:172] (0xc002490e70) Reply frame received for 1 I0622 22:02:12.836636 6 log.go:172] (0xc002490e70) (0xc0014b8000) Create stream I0622 22:02:12.836665 6 log.go:172] (0xc002490e70) (0xc0014b8000) Stream added, broadcasting: 3 I0622 22:02:12.837999 6 log.go:172] (0xc002490e70) Reply frame received for 3 I0622 22:02:12.838056 6 log.go:172] (0xc002490e70) (0xc0014b80a0) 
Create stream I0622 22:02:12.838073 6 log.go:172] (0xc002490e70) (0xc0014b80a0) Stream added, broadcasting: 5 I0622 22:02:12.839006 6 log.go:172] (0xc002490e70) Reply frame received for 5 I0622 22:02:12.889454 6 log.go:172] (0xc002490e70) Data frame received for 5 I0622 22:02:12.889478 6 log.go:172] (0xc0014b80a0) (5) Data frame handling I0622 22:02:12.889538 6 log.go:172] (0xc002490e70) Data frame received for 3 I0622 22:02:12.889577 6 log.go:172] (0xc0014b8000) (3) Data frame handling I0622 22:02:12.889616 6 log.go:172] (0xc0014b8000) (3) Data frame sent I0622 22:02:12.889631 6 log.go:172] (0xc002490e70) Data frame received for 3 I0622 22:02:12.889645 6 log.go:172] (0xc0014b8000) (3) Data frame handling I0622 22:02:12.891286 6 log.go:172] (0xc002490e70) Data frame received for 1 I0622 22:02:12.891315 6 log.go:172] (0xc000cf1680) (1) Data frame handling I0622 22:02:12.891338 6 log.go:172] (0xc000cf1680) (1) Data frame sent I0622 22:02:12.891376 6 log.go:172] (0xc002490e70) (0xc000cf1680) Stream removed, broadcasting: 1 I0622 22:02:12.891421 6 log.go:172] (0xc002490e70) Go away received I0622 22:02:12.891491 6 log.go:172] (0xc002490e70) (0xc000cf1680) Stream removed, broadcasting: 1 I0622 22:02:12.891519 6 log.go:172] (0xc002490e70) (0xc0014b8000) Stream removed, broadcasting: 3 I0622 22:02:12.891531 6 log.go:172] (0xc002490e70) (0xc0014b80a0) Stream removed, broadcasting: 5 Jun 22 22:02:12.891: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:12.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5920" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":199,"skipped":3231,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:12.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 22 22:02:12.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 22:02:12.991: INFO: Waiting for terminating namespaces to be deleted... 
Jun 22 22:02:12.994: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 22 22:02:12.999: INFO: pod-sharedvolume-c82c9977-361b-4440-84b8-1b4d591d5102 from emptydir-5920 started at 2020-06-22 22:02:08 +0000 UTC (2 container statuses recorded) Jun 22 22:02:12.999: INFO: Container busybox-main-container ready: true, restart count 0 Jun 22 22:02:12.999: INFO: Container busybox-sub-container ready: true, restart count 0 Jun 22 22:02:12.999: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 22:02:12.999: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 22:02:12.999: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 22:02:12.999: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 22:02:12.999: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 22 22:02:13.025: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 22:02:13.025: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 22:02:13.025: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 22 22:02:13.025: INFO: Container kube-hunter ready: false, restart count 0 Jun 22 22:02:13.025: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 22 22:02:13.025: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 22:02:13.025: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 22 22:02:13.025: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161afcc315ab844f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:14.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9466" for this suite. 
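The FailedScheduling event above is exactly what a non-matching nodeSelector produces: the pod is never bound and stays Pending. Reproducing it by hand with an illustrative label that no node carries:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo
  spec:
    nodeSelector:
      label: value-that-matches-no-node     # no node has this label
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  kubectl describe pod restricted-pod-demo  # Events: FailedScheduling, node(s) didn't match node selector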
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":200,"skipped":3240,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:14.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 22:02:18.235: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:18.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4844" for this suite. 
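The check above confirms that when a container exits zero, the termination message is read from the file at terminationMessagePath even with FallbackToLogsOnError set (the log fallback applies only on error). A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-msg-demo
  spec:
    restartPolicy: Never
    containers:
    - name: term
      image: busybox
      # On success the message comes from the file, not from the logs.
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-msg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: OK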
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:18.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jun 22 22:02:18.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 22 22:02:18.452: INFO: stderr: "" Jun 22 22:02:18.452: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:18.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8505" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":202,"skipped":3302,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:18.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d4d883e8-d4cb-412a-ab89-0017ef6584c8 STEP: Creating a pod to test consume secrets Jun 22 22:02:18.601: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea" in namespace "projected-8872" to be "success or failure" Jun 22 22:02:18.606: INFO: Pod "pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337202ms Jun 22 22:02:20.663: INFO: Pod "pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061247022s Jun 22 22:02:22.667: INFO: Pod "pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066232996s STEP: Saw pod success Jun 22 22:02:22.668: INFO: Pod "pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea" satisfied condition "success or failure" Jun 22 22:02:22.671: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea container projected-secret-volume-test: STEP: delete the pod Jun 22 22:02:22.687: INFO: Waiting for pod pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea to disappear Jun 22 22:02:22.697: INFO: Pod pod-projected-secrets-bd0676ea-c903-49c7-b58f-790258ce9aea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:22.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8872" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3309,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:22.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8167 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 22:02:22.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 22:02:50.879: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:8080/dial?request=hostname&protocol=udp&host=10.244.1.13&port=8081&tries=1'] Namespace:pod-network-test-8167 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:02:50.879: INFO: >>> kubeConfig: /root/.kube/config I0622 22:02:50.916157 6 log.go:172] (0xc0024916b0) (0xc0027e65a0) Create stream I0622 22:02:50.916181 6 log.go:172] (0xc0024916b0) (0xc0027e65a0) Stream added, broadcasting: 1 I0622 22:02:50.918855 6 log.go:172] (0xc0024916b0) Reply frame received for 1 I0622 22:02:50.918892 6 log.go:172] (0xc0024916b0) (0xc0027e6640) Create stream I0622 22:02:50.918902 6 log.go:172] (0xc0024916b0) (0xc0027e6640) Stream added, broadcasting: 3 I0622 22:02:50.919794 6 log.go:172] (0xc0024916b0) Reply frame received for 3 I0622 22:02:50.919824 6 log.go:172] (0xc0024916b0) (0xc00222b040) Create stream I0622 22:02:50.919835 6 log.go:172] (0xc0024916b0) (0xc00222b040) Stream added, broadcasting: 5 I0622 22:02:50.920788 6 log.go:172] (0xc0024916b0) Reply frame received for 5 I0622 22:02:51.063224 6 log.go:172] (0xc0024916b0) Data frame received for 3 I0622 22:02:51.063270 6 log.go:172] (0xc0027e6640) (3) Data frame handling I0622 22:02:51.063296 6 log.go:172] (0xc0027e6640) (3) Data frame sent I0622 22:02:51.064300 6 log.go:172] (0xc0024916b0) Data frame received for 5 I0622 22:02:51.064359 6 log.go:172] (0xc00222b040) (5) Data frame handling I0622 22:02:51.064408 6 log.go:172] (0xc0024916b0) Data frame received for 3 I0622 22:02:51.064441 6 log.go:172] (0xc0027e6640) (3) Data frame handling I0622 22:02:51.066647 6 log.go:172] (0xc0024916b0) Data frame received for 1 I0622 22:02:51.066674 6 log.go:172] (0xc0027e65a0) (1) Data frame handling I0622 22:02:51.066687 6 log.go:172] (0xc0027e65a0) (1) Data frame sent I0622 22:02:51.066702 6 log.go:172] (0xc0024916b0) (0xc0027e65a0) Stream removed, broadcasting: 1 I0622 22:02:51.066814 6 log.go:172] (0xc0024916b0) (0xc0027e65a0) Stream removed, broadcasting: 1 I0622 22:02:51.066832 6 log.go:172] (0xc0024916b0) (0xc0027e6640) Stream removed, broadcasting: 3 I0622 22:02:51.066899 6 log.go:172] (0xc0024916b0) Go away received 
I0622 22:02:51.067089 6 log.go:172] (0xc0024916b0) (0xc00222b040) Stream removed, broadcasting: 5 Jun 22 22:02:51.067: INFO: Waiting for responses: map[] Jun 22 22:02:51.070: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:8080/dial?request=hostname&protocol=udp&host=10.244.2.95&port=8081&tries=1'] Namespace:pod-network-test-8167 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:02:51.071: INFO: >>> kubeConfig: /root/.kube/config I0622 22:02:51.103274 6 log.go:172] (0xc00290c8f0) (0xc002844be0) Create stream I0622 22:02:51.103304 6 log.go:172] (0xc00290c8f0) (0xc002844be0) Stream added, broadcasting: 1 I0622 22:02:51.105501 6 log.go:172] (0xc00290c8f0) Reply frame received for 1 I0622 22:02:51.105529 6 log.go:172] (0xc00290c8f0) (0xc0027e6780) Create stream I0622 22:02:51.105537 6 log.go:172] (0xc00290c8f0) (0xc0027e6780) Stream added, broadcasting: 3 I0622 22:02:51.106460 6 log.go:172] (0xc00290c8f0) Reply frame received for 3 I0622 22:02:51.106514 6 log.go:172] (0xc00290c8f0) (0xc002844c80) Create stream I0622 22:02:51.106529 6 log.go:172] (0xc00290c8f0) (0xc002844c80) Stream added, broadcasting: 5 I0622 22:02:51.107384 6 log.go:172] (0xc00290c8f0) Reply frame received for 5 I0622 22:02:51.201751 6 log.go:172] (0xc00290c8f0) Data frame received for 3 I0622 22:02:51.201773 6 log.go:172] (0xc0027e6780) (3) Data frame handling I0622 22:02:51.201784 6 log.go:172] (0xc0027e6780) (3) Data frame sent I0622 22:02:51.202428 6 log.go:172] (0xc00290c8f0) Data frame received for 5 I0622 22:02:51.202453 6 log.go:172] (0xc002844c80) (5) Data frame handling I0622 22:02:51.202474 6 log.go:172] (0xc00290c8f0) Data frame received for 3 I0622 22:02:51.202484 6 log.go:172] (0xc0027e6780) (3) Data frame handling I0622 22:02:51.204060 6 log.go:172] (0xc00290c8f0) Data frame received for 1 I0622 22:02:51.204110 6 log.go:172] (0xc002844be0) (1) Data frame handling I0622 22:02:51.204150 6 log.go:172] (0xc002844be0) (1) Data frame sent I0622 22:02:51.204183 6 log.go:172] (0xc00290c8f0) (0xc002844be0) Stream removed, broadcasting: 1 I0622 22:02:51.204214 6 log.go:172] (0xc00290c8f0) Go away received I0622 22:02:51.204361 6 log.go:172] (0xc00290c8f0) (0xc002844be0) Stream removed, broadcasting: 1 I0622 22:02:51.204404 6 log.go:172] (0xc00290c8f0) (0xc0027e6780) Stream removed, broadcasting: 3 I0622 22:02:51.204422 6 log.go:172] (0xc00290c8f0) (0xc002844c80) Stream removed, broadcasting: 5 Jun 22 22:02:51.204: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:51.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8167" for this suite. 
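The probe above works by exec-ing curl inside the host-network test pod against the agnhost container's /dial endpoint, which then sends a UDP "hostname" request to the target pod and reports what answered. A minimal stdlib-only sketch of the same check, runnable from anything that can reach the pod IP (the addresses and ports below are the ones from this run; substitute your own):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// Asks the agnhost test container at probeURL to dial the target pod over UDP
// and report the hostname it answers with, mirroring the e2e framework's check.
func main() {
	// IPs/ports taken from this test run; replace with your own pod addresses.
	probeURL := "http://10.244.1.14:8080/dial?request=hostname&protocol=udp&host=10.244.1.13&port=8081&tries=1"

	resp, err := http.Get(probeURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// agnhost replies with JSON such as {"responses":["netserver-0"]};
	// an empty responses list means the UDP path between the pods is broken.
	fmt.Println(string(body))
}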
• [SLOW TEST:28.507 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3319,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:51.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0622 22:02:52.328716 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 22:02:52.328: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:52.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9296" for this suite. 
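The ReplicaSet disappears because the deployment is deleted without orphaning, so the garbage collector reaps every dependent carrying an ownerReference to it; the intermediate "expected 0 rs, got 1 rs" STEP lines are just the poll catching the collector mid-flight. A sketch of the equivalent non-orphaning delete, assuming client-go method signatures from the v0.17 line that matches this suite (newer releases also take a context.Context) and a hypothetical deployment name:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation deletes the Deployment object first and lets the
	// garbage collector reap the owned ReplicaSets and Pods afterwards.
	// "example-deployment" is a hypothetical name; the namespace is from this run.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("gc-9296").Delete(
		"example-deployment",
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}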
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":205,"skipped":3335,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:52.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jun 22 22:02:53.051: INFO: created pod pod-service-account-defaultsa Jun 22 22:02:53.051: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 22 22:02:53.082: INFO: created pod pod-service-account-mountsa Jun 22 22:02:53.082: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 22 22:02:53.110: INFO: created pod pod-service-account-nomountsa Jun 22 22:02:53.110: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 22 22:02:53.125: INFO: created pod pod-service-account-defaultsa-mountspec Jun 22 22:02:53.125: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 22 22:02:53.148: INFO: created pod pod-service-account-mountsa-mountspec Jun 22 22:02:53.148: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 22 22:02:53.180: INFO: created pod pod-service-account-nomountsa-mountspec Jun 22 22:02:53.180: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 22 22:02:53.226: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 22 22:02:53.226: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 22 22:02:53.239: INFO: created pod pod-service-account-mountsa-nomountspec Jun 22 22:02:53.239: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 22 22:02:53.281: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 22 22:02:53.281: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:02:53.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8706" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":206,"skipped":3335,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:02:53.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:02:54.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f" in namespace "projected-5501" to be "success or failure" Jun 22 22:02:54.364: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.202829ms Jun 22 22:02:56.580: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237991225s Jun 22 22:02:58.706: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363959676s Jun 22 22:03:00.850: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5084262s Jun 22 22:03:03.264: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921951315s Jun 22 22:03:05.499: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.157441705s Jun 22 22:03:07.555: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Running", Reason="", readiness=true. Elapsed: 13.213618538s Jun 22 22:03:09.559: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.21764454s STEP: Saw pod success Jun 22 22:03:09.559: INFO: Pod "downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f" satisfied condition "success or failure" Jun 22 22:03:09.563: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f container client-container: STEP: delete the pod Jun 22 22:03:09.652: INFO: Waiting for pod downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f to disappear Jun 22 22:03:09.657: INFO: Pod downwardapi-volume-e186f61b-75a8-47a1-8df3-b89c70d36b5f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5501" for this suite. 
• [SLOW TEST:15.762 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3343,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:09.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:03:10.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716" in namespace "projected-3701" to be "success or failure" Jun 22 22:03:10.193: INFO: Pod "downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716": Phase="Pending", Reason="", readiness=false. Elapsed: 88.99353ms Jun 22 22:03:12.197: INFO: Pod "downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09305121s Jun 22 22:03:14.202: INFO: Pod "downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09818055s STEP: Saw pod success Jun 22 22:03:14.202: INFO: Pod "downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716" satisfied condition "success or failure" Jun 22 22:03:14.204: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716 container client-container: STEP: delete the pod Jun 22 22:03:14.245: INFO: Waiting for pod downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716 to disappear Jun 22 22:03:14.281: INFO: Pod downwardapi-volume-4a9cfaab-2e75-4f45-800b-059c18ffe716 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:14.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3701" for this suite. 
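Here the assertion is on the projected file's permission bits rather than its content: each downward API item can carry an explicit mode. A sketch of one such item; the 0400 mode and "podname" path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner; the test verifies this with a stat inside the pod
	item := corev1.DownwardAPIVolumeFile{
		Path: "podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		Mode: &mode,
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}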
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:14.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4254" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":209,"skipped":3400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:18.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 22:03:18.999: INFO: Waiting up to 5m0s for pod "pod-2f422bb9-e61c-443c-a284-9effb30ca0a4" in namespace "emptydir-9788" to be "success or failure" Jun 22 22:03:19.018: INFO: Pod "pod-2f422bb9-e61c-443c-a284-9effb30ca0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.264852ms Jun 22 22:03:21.022: INFO: Pod "pod-2f422bb9-e61c-443c-a284-9effb30ca0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023312679s Jun 22 22:03:23.026: INFO: Pod "pod-2f422bb9-e61c-443c-a284-9effb30ca0a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02735886s STEP: Saw pod success Jun 22 22:03:23.026: INFO: Pod "pod-2f422bb9-e61c-443c-a284-9effb30ca0a4" satisfied condition "success or failure" Jun 22 22:03:23.030: INFO: Trying to get logs from node jerma-worker2 pod pod-2f422bb9-e61c-443c-a284-9effb30ca0a4 container test-container: STEP: delete the pod Jun 22 22:03:23.077: INFO: Waiting for pod pod-2f422bb9-e61c-443c-a284-9effb30ca0a4 to disappear Jun 22 22:03:23.088: INFO: Pod pod-2f422bb9-e61c-443c-a284-9effb30ca0a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:23.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9788" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:23.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jun 22 22:03:23.162: INFO: Waiting up to 5m0s for pod "var-expansion-687f3857-a73d-4dee-818b-555a07461500" in namespace "var-expansion-4690" to be "success or failure" Jun 22 22:03:23.170: INFO: Pod "var-expansion-687f3857-a73d-4dee-818b-555a07461500": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025716ms Jun 22 22:03:25.174: INFO: Pod "var-expansion-687f3857-a73d-4dee-818b-555a07461500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011686294s Jun 22 22:03:27.178: INFO: Pod "var-expansion-687f3857-a73d-4dee-818b-555a07461500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016055929s STEP: Saw pod success Jun 22 22:03:27.178: INFO: Pod "var-expansion-687f3857-a73d-4dee-818b-555a07461500" satisfied condition "success or failure" Jun 22 22:03:27.182: INFO: Trying to get logs from node jerma-worker pod var-expansion-687f3857-a73d-4dee-818b-555a07461500 container dapi-container: STEP: delete the pod Jun 22 22:03:27.288: INFO: Waiting for pod var-expansion-687f3857-a73d-4dee-818b-555a07461500 to disappear Jun 22 22:03:27.292: INFO: Pod var-expansion-687f3857-a73d-4dee-818b-555a07461500 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:27.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4690" for this suite. 
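Substitution in a container's args is performed by the kubelet using $(VAR) references resolved against the container's env, not by a shell. A minimal sketch with an assumed variable name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c"},
		// $(GREETING) is expanded by the kubelet before the container starts;
		// a reference that matches no env var is passed through literally.
		Args: []string{"echo $(GREETING)"},
		Env: []corev1.EnvVar{{
			Name:  "GREETING",
			Value: "hello from var-expansion",
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}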
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:27.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 22:03:31.896: INFO: Successfully updated pod "pod-update-activedeadlineseconds-250c2cbc-fbfb-469f-a3e0-1d3fd05759eb" Jun 22 22:03:31.896: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-250c2cbc-fbfb-469f-a3e0-1d3fd05759eb" in namespace "pods-1349" to be "terminated due to deadline exceeded" Jun 22 22:03:31.927: INFO: Pod "pod-update-activedeadlineseconds-250c2cbc-fbfb-469f-a3e0-1d3fd05759eb": Phase="Running", Reason="", readiness=true. Elapsed: 31.291491ms Jun 22 22:03:33.931: INFO: Pod "pod-update-activedeadlineseconds-250c2cbc-fbfb-469f-a3e0-1d3fd05759eb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.03460096s Jun 22 22:03:33.931: INFO: Pod "pod-update-activedeadlineseconds-250c2cbc-fbfb-469f-a3e0-1d3fd05759eb" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:33.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1349" for this suite. 
• [SLOW TEST:6.638 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3515,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:33.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d886a416-2f07-42eb-8a22-81cad947257f STEP: Creating a pod to test consume configMaps Jun 22 22:03:34.030: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31" in namespace "projected-3858" to be "success or failure" Jun 22 22:03:34.036: INFO: Pod "pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31": Phase="Pending", Reason="", readiness=false. Elapsed: 5.527436ms Jun 22 22:03:36.040: INFO: Pod "pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009742989s Jun 22 22:03:38.044: INFO: Pod "pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014416451s STEP: Saw pod success Jun 22 22:03:38.044: INFO: Pod "pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31" satisfied condition "success or failure" Jun 22 22:03:38.048: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31 container projected-configmap-volume-test: STEP: delete the pod Jun 22 22:03:38.071: INFO: Waiting for pod pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31 to disappear Jun 22 22:03:38.075: INFO: Pod pod-projected-configmaps-8f747ba2-36be-44ba-9cfd-41c52abaae31 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:38.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3858" for this suite. 
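"With mappings" means individual ConfigMap keys are projected under chosen file paths via items, instead of every key appearing under its own name. A sketch of the projection source; the key and path names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "projected-configmap-test-volume-map",
			},
			// Only the listed keys are projected, each under its mapped path.
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "path/to/data-2",
			}},
		},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}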
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:38.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jun 22 22:03:42.697: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2107 pod-service-account-2b9c858f-75b9-4595-bbc5-096447729d49 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 22 22:03:42.929: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2107 pod-service-account-2b9c858f-75b9-4595-bbc5-096447729d49 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 22 22:03:43.190: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2107 pod-service-account-2b9c858f-75b9-4595-bbc5-096447729d49 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:43.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2107" for this suite. 
• [SLOW TEST:5.348 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":214,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:43.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1115.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1115.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1115.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1115.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1115.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1115.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 22:03:51.539: INFO: DNS probes using dns-1115/dns-test-41439e69-e7e9-4cbd-843a-0c7a6c586fda succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:03:51.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1115" for this suite. 
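The probe scripts derive the pod's A record by dash-joining the IP octets under <namespace>.pod.<cluster-domain>, which is what the awk one-liner does. The same derivation in Go, assuming the default cluster.local domain and an illustrative pod IP; the lookup only resolves from inside the cluster:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	podIP := "10.244.1.13" // illustrative pod IP
	namespace := "dns-1115"

	// 10.244.1.13 -> 10-244-1-13.dns-1115.pod.cluster.local
	aRecord := fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)

	// Equivalent of the probe's `dig ... A` query; inside a pod this goes
	// through the cluster DNS service alongside the kubelet-managed /etc/hosts.
	addrs, err := net.LookupHost(aRecord)
	if err != nil {
		panic(err)
	}
	fmt.Println(aRecord, "->", addrs)
}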
• [SLOW TEST:8.218 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":215,"skipped":3593,"failed":0} SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:03:51.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 22 22:04:02.087: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.087: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.115080 6 log.go:172] (0xc001bf4790) (0xc0029e0c80) Create stream I0622 22:04:02.115117 6 log.go:172] (0xc001bf4790) (0xc0029e0c80) Stream added, broadcasting: 1 I0622 22:04:02.117007 6 log.go:172] (0xc001bf4790) Reply frame received for 1 I0622 22:04:02.117058 6 log.go:172] (0xc001bf4790) (0xc00294b540) Create stream I0622 22:04:02.117079 6 log.go:172] (0xc001bf4790) (0xc00294b540) Stream added, broadcasting: 3 I0622 22:04:02.118474 6 log.go:172] (0xc001bf4790) Reply frame received for 3 I0622 22:04:02.118538 6 log.go:172] (0xc001bf4790) (0xc0029e0dc0) Create stream I0622 22:04:02.118558 6 log.go:172] (0xc001bf4790) (0xc0029e0dc0) Stream added, broadcasting: 5 I0622 22:04:02.119685 6 log.go:172] (0xc001bf4790) Reply frame received for 5 I0622 22:04:02.220795 6 log.go:172] (0xc001bf4790) Data frame received for 5 I0622 22:04:02.220847 6 log.go:172] (0xc0029e0dc0) (5) Data frame handling I0622 22:04:02.220893 6 log.go:172] (0xc001bf4790) Data frame received for 3 I0622 22:04:02.220916 6 log.go:172] (0xc00294b540) (3) Data frame handling I0622 22:04:02.220951 6 log.go:172] (0xc00294b540) (3) Data frame sent I0622 22:04:02.220976 6 log.go:172] (0xc001bf4790) Data frame received for 3 I0622 22:04:02.220998 6 log.go:172] (0xc00294b540) (3) Data frame handling I0622 22:04:02.223117 6 log.go:172] (0xc001bf4790) Data frame received for 1 I0622 22:04:02.223151 6 log.go:172] (0xc0029e0c80) (1) Data frame handling I0622 22:04:02.223167 6 log.go:172] (0xc0029e0c80) (1) Data frame sent I0622 22:04:02.223182 6 log.go:172] (0xc001bf4790) (0xc0029e0c80) Stream removed, broadcasting: 1 I0622 22:04:02.223294 6 log.go:172] (0xc001bf4790) (0xc0029e0c80) Stream removed, broadcasting: 1 I0622 
22:04:02.223325 6 log.go:172] (0xc001bf4790) (0xc00294b540) Stream removed, broadcasting: 3 I0622 22:04:02.223507 6 log.go:172] (0xc001bf4790) (0xc0029e0dc0) Stream removed, broadcasting: 5 Jun 22 22:04:02.223: INFO: Exec stderr: "" Jun 22 22:04:02.223: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.223: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.225516 6 log.go:172] (0xc001bf4790) Go away received I0622 22:04:02.254538 6 log.go:172] (0xc00290c630) (0xc00294ba40) Create stream I0622 22:04:02.254568 6 log.go:172] (0xc00290c630) (0xc00294ba40) Stream added, broadcasting: 1 I0622 22:04:02.256096 6 log.go:172] (0xc00290c630) Reply frame received for 1 I0622 22:04:02.256135 6 log.go:172] (0xc00290c630) (0xc00294bb80) Create stream I0622 22:04:02.256148 6 log.go:172] (0xc00290c630) (0xc00294bb80) Stream added, broadcasting: 3 I0622 22:04:02.257082 6 log.go:172] (0xc00290c630) Reply frame received for 3 I0622 22:04:02.257293 6 log.go:172] (0xc00290c630) (0xc00294bcc0) Create stream I0622 22:04:02.257319 6 log.go:172] (0xc00290c630) (0xc00294bcc0) Stream added, broadcasting: 5 I0622 22:04:02.258666 6 log.go:172] (0xc00290c630) Reply frame received for 5 I0622 22:04:02.326646 6 log.go:172] (0xc00290c630) Data frame received for 3 I0622 22:04:02.326682 6 log.go:172] (0xc00294bb80) (3) Data frame handling I0622 22:04:02.326690 6 log.go:172] (0xc00294bb80) (3) Data frame sent I0622 22:04:02.326696 6 log.go:172] (0xc00290c630) Data frame received for 3 I0622 22:04:02.326702 6 log.go:172] (0xc00294bb80) (3) Data frame handling I0622 22:04:02.326730 6 log.go:172] (0xc00290c630) Data frame received for 5 I0622 22:04:02.326739 6 log.go:172] (0xc00294bcc0) (5) Data frame handling I0622 22:04:02.328068 6 log.go:172] (0xc00290c630) Data frame received for 1 I0622 22:04:02.328102 6 log.go:172] (0xc00294ba40) (1) Data frame handling I0622 22:04:02.328125 6 log.go:172] (0xc00294ba40) (1) Data frame sent I0622 22:04:02.328141 6 log.go:172] (0xc00290c630) (0xc00294ba40) Stream removed, broadcasting: 1 I0622 22:04:02.328163 6 log.go:172] (0xc00290c630) Go away received I0622 22:04:02.328304 6 log.go:172] (0xc00290c630) (0xc00294ba40) Stream removed, broadcasting: 1 I0622 22:04:02.328328 6 log.go:172] (0xc00290c630) (0xc00294bb80) Stream removed, broadcasting: 3 I0622 22:04:02.328335 6 log.go:172] (0xc00290c630) (0xc00294bcc0) Stream removed, broadcasting: 5 Jun 22 22:04:02.328: INFO: Exec stderr: "" Jun 22 22:04:02.328: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.328: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.365040 6 log.go:172] (0xc00299cc60) (0xc001505720) Create stream I0622 22:04:02.365070 6 log.go:172] (0xc00299cc60) (0xc001505720) Stream added, broadcasting: 1 I0622 22:04:02.367559 6 log.go:172] (0xc00299cc60) Reply frame received for 1 I0622 22:04:02.367595 6 log.go:172] (0xc00299cc60) (0xc0029e0f00) Create stream I0622 22:04:02.367607 6 log.go:172] (0xc00299cc60) (0xc0029e0f00) Stream added, broadcasting: 3 I0622 22:04:02.368671 6 log.go:172] (0xc00299cc60) Reply frame received for 3 I0622 22:04:02.368716 6 log.go:172] (0xc00299cc60) (0xc0029e1040) Create stream I0622 22:04:02.368729 6 log.go:172] (0xc00299cc60) (0xc0029e1040) Stream added, 
broadcasting: 5 I0622 22:04:02.369810 6 log.go:172] (0xc00299cc60) Reply frame received for 5 I0622 22:04:02.430207 6 log.go:172] (0xc00299cc60) Data frame received for 5 I0622 22:04:02.430233 6 log.go:172] (0xc0029e1040) (5) Data frame handling I0622 22:04:02.430261 6 log.go:172] (0xc00299cc60) Data frame received for 3 I0622 22:04:02.430270 6 log.go:172] (0xc0029e0f00) (3) Data frame handling I0622 22:04:02.430277 6 log.go:172] (0xc0029e0f00) (3) Data frame sent I0622 22:04:02.430284 6 log.go:172] (0xc00299cc60) Data frame received for 3 I0622 22:04:02.430290 6 log.go:172] (0xc0029e0f00) (3) Data frame handling I0622 22:04:02.431334 6 log.go:172] (0xc00299cc60) Data frame received for 1 I0622 22:04:02.431347 6 log.go:172] (0xc001505720) (1) Data frame handling I0622 22:04:02.431353 6 log.go:172] (0xc001505720) (1) Data frame sent I0622 22:04:02.431361 6 log.go:172] (0xc00299cc60) (0xc001505720) Stream removed, broadcasting: 1 I0622 22:04:02.431371 6 log.go:172] (0xc00299cc60) Go away received I0622 22:04:02.431532 6 log.go:172] (0xc00299cc60) (0xc001505720) Stream removed, broadcasting: 1 I0622 22:04:02.431555 6 log.go:172] (0xc00299cc60) (0xc0029e0f00) Stream removed, broadcasting: 3 I0622 22:04:02.431562 6 log.go:172] (0xc00299cc60) (0xc0029e1040) Stream removed, broadcasting: 5 Jun 22 22:04:02.431: INFO: Exec stderr: "" Jun 22 22:04:02.431: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.431: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.458890 6 log.go:172] (0xc00290cc60) (0xc0028bc140) Create stream I0622 22:04:02.458915 6 log.go:172] (0xc00290cc60) (0xc0028bc140) Stream added, broadcasting: 1 I0622 22:04:02.460241 6 log.go:172] (0xc00290cc60) Reply frame received for 1 I0622 22:04:02.460270 6 log.go:172] (0xc00290cc60) (0xc0028bc1e0) Create stream I0622 22:04:02.460280 6 log.go:172] (0xc00290cc60) (0xc0028bc1e0) Stream added, broadcasting: 3 I0622 22:04:02.461094 6 log.go:172] (0xc00290cc60) Reply frame received for 3 I0622 22:04:02.461307 6 log.go:172] (0xc00290cc60) (0xc0015059a0) Create stream I0622 22:04:02.461360 6 log.go:172] (0xc00290cc60) (0xc0015059a0) Stream added, broadcasting: 5 I0622 22:04:02.462242 6 log.go:172] (0xc00290cc60) Reply frame received for 5 I0622 22:04:02.504294 6 log.go:172] (0xc00290cc60) Data frame received for 5 I0622 22:04:02.504332 6 log.go:172] (0xc0015059a0) (5) Data frame handling I0622 22:04:02.504358 6 log.go:172] (0xc00290cc60) Data frame received for 3 I0622 22:04:02.504374 6 log.go:172] (0xc0028bc1e0) (3) Data frame handling I0622 22:04:02.504383 6 log.go:172] (0xc0028bc1e0) (3) Data frame sent I0622 22:04:02.504392 6 log.go:172] (0xc00290cc60) Data frame received for 3 I0622 22:04:02.504417 6 log.go:172] (0xc0028bc1e0) (3) Data frame handling I0622 22:04:02.505515 6 log.go:172] (0xc00290cc60) Data frame received for 1 I0622 22:04:02.505571 6 log.go:172] (0xc0028bc140) (1) Data frame handling I0622 22:04:02.505610 6 log.go:172] (0xc0028bc140) (1) Data frame sent I0622 22:04:02.505650 6 log.go:172] (0xc00290cc60) (0xc0028bc140) Stream removed, broadcasting: 1 I0622 22:04:02.505702 6 log.go:172] (0xc00290cc60) Go away received I0622 22:04:02.505767 6 log.go:172] (0xc00290cc60) (0xc0028bc140) Stream removed, broadcasting: 1 I0622 22:04:02.505782 6 log.go:172] (0xc00290cc60) (0xc0028bc1e0) Stream removed, broadcasting: 3 I0622 22:04:02.505792 6 log.go:172] 
(0xc00290cc60) (0xc0015059a0) Stream removed, broadcasting: 5 Jun 22 22:04:02.505: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 22 22:04:02.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.505: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.532208 6 log.go:172] (0xc00290d290) (0xc0028bc460) Create stream I0622 22:04:02.532236 6 log.go:172] (0xc00290d290) (0xc0028bc460) Stream added, broadcasting: 1 I0622 22:04:02.536029 6 log.go:172] (0xc00290d290) Reply frame received for 1 I0622 22:04:02.536071 6 log.go:172] (0xc00290d290) (0xc000cc3e00) Create stream I0622 22:04:02.536083 6 log.go:172] (0xc00290d290) (0xc000cc3e00) Stream added, broadcasting: 3 I0622 22:04:02.538101 6 log.go:172] (0xc00290d290) Reply frame received for 3 I0622 22:04:02.538139 6 log.go:172] (0xc00290d290) (0xc002310000) Create stream I0622 22:04:02.538154 6 log.go:172] (0xc00290d290) (0xc002310000) Stream added, broadcasting: 5 I0622 22:04:02.541650 6 log.go:172] (0xc00290d290) Reply frame received for 5 I0622 22:04:02.619626 6 log.go:172] (0xc00290d290) Data frame received for 5 I0622 22:04:02.619675 6 log.go:172] (0xc00290d290) Data frame received for 3 I0622 22:04:02.619714 6 log.go:172] (0xc000cc3e00) (3) Data frame handling I0622 22:04:02.619727 6 log.go:172] (0xc000cc3e00) (3) Data frame sent I0622 22:04:02.619765 6 log.go:172] (0xc00290d290) Data frame received for 3 I0622 22:04:02.619777 6 log.go:172] (0xc000cc3e00) (3) Data frame handling I0622 22:04:02.619804 6 log.go:172] (0xc002310000) (5) Data frame handling I0622 22:04:02.620957 6 log.go:172] (0xc00290d290) Data frame received for 1 I0622 22:04:02.620985 6 log.go:172] (0xc0028bc460) (1) Data frame handling I0622 22:04:02.621000 6 log.go:172] (0xc0028bc460) (1) Data frame sent I0622 22:04:02.621035 6 log.go:172] (0xc00290d290) (0xc0028bc460) Stream removed, broadcasting: 1 I0622 22:04:02.621057 6 log.go:172] (0xc00290d290) Go away received I0622 22:04:02.621361 6 log.go:172] (0xc00290d290) (0xc0028bc460) Stream removed, broadcasting: 1 I0622 22:04:02.621372 6 log.go:172] (0xc00290d290) (0xc000cc3e00) Stream removed, broadcasting: 3 I0622 22:04:02.621378 6 log.go:172] (0xc00290d290) (0xc002310000) Stream removed, broadcasting: 5 Jun 22 22:04:02.621: INFO: Exec stderr: "" Jun 22 22:04:02.621: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.621: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.654832 6 log.go:172] (0xc00290d8c0) (0xc0028bc780) Create stream I0622 22:04:02.654860 6 log.go:172] (0xc00290d8c0) (0xc0028bc780) Stream added, broadcasting: 1 I0622 22:04:02.657745 6 log.go:172] (0xc00290d8c0) Reply frame received for 1 I0622 22:04:02.657810 6 log.go:172] (0xc00290d8c0) (0xc0028bc820) Create stream I0622 22:04:02.657836 6 log.go:172] (0xc00290d8c0) (0xc0028bc820) Stream added, broadcasting: 3 I0622 22:04:02.658959 6 log.go:172] (0xc00290d8c0) Reply frame received for 3 I0622 22:04:02.659012 6 log.go:172] (0xc00290d8c0) (0xc001505c20) Create stream I0622 22:04:02.659038 6 log.go:172] (0xc00290d8c0) (0xc001505c20) Stream added, broadcasting: 5 I0622 22:04:02.660272 6 log.go:172] (0xc00290d8c0) Reply frame received for 
5 I0622 22:04:02.737610 6 log.go:172] (0xc00290d8c0) Data frame received for 3 I0622 22:04:02.737670 6 log.go:172] (0xc0028bc820) (3) Data frame handling I0622 22:04:02.737693 6 log.go:172] (0xc0028bc820) (3) Data frame sent I0622 22:04:02.737716 6 log.go:172] (0xc00290d8c0) Data frame received for 3 I0622 22:04:02.737731 6 log.go:172] (0xc0028bc820) (3) Data frame handling I0622 22:04:02.737779 6 log.go:172] (0xc00290d8c0) Data frame received for 5 I0622 22:04:02.737825 6 log.go:172] (0xc001505c20) (5) Data frame handling I0622 22:04:02.739190 6 log.go:172] (0xc00290d8c0) Data frame received for 1 I0622 22:04:02.739227 6 log.go:172] (0xc0028bc780) (1) Data frame handling I0622 22:04:02.739264 6 log.go:172] (0xc0028bc780) (1) Data frame sent I0622 22:04:02.739287 6 log.go:172] (0xc00290d8c0) (0xc0028bc780) Stream removed, broadcasting: 1 I0622 22:04:02.739310 6 log.go:172] (0xc00290d8c0) Go away received I0622 22:04:02.739415 6 log.go:172] (0xc00290d8c0) (0xc0028bc780) Stream removed, broadcasting: 1 I0622 22:04:02.739428 6 log.go:172] (0xc00290d8c0) (0xc0028bc820) Stream removed, broadcasting: 3 I0622 22:04:02.739434 6 log.go:172] (0xc00290d8c0) (0xc001505c20) Stream removed, broadcasting: 5 Jun 22 22:04:02.739: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 22 22:04:02.739: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.739: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.771039 6 log.go:172] (0xc00299d290) (0xc001e041e0) Create stream I0622 22:04:02.771066 6 log.go:172] (0xc00299d290) (0xc001e041e0) Stream added, broadcasting: 1 I0622 22:04:02.773575 6 log.go:172] (0xc00299d290) Reply frame received for 1 I0622 22:04:02.773624 6 log.go:172] (0xc00299d290) (0xc0028bc8c0) Create stream I0622 22:04:02.773643 6 log.go:172] (0xc00299d290) (0xc0028bc8c0) Stream added, broadcasting: 3 I0622 22:04:02.774461 6 log.go:172] (0xc00299d290) Reply frame received for 3 I0622 22:04:02.774490 6 log.go:172] (0xc00299d290) (0xc0029e10e0) Create stream I0622 22:04:02.774500 6 log.go:172] (0xc00299d290) (0xc0029e10e0) Stream added, broadcasting: 5 I0622 22:04:02.775351 6 log.go:172] (0xc00299d290) Reply frame received for 5 I0622 22:04:02.841425 6 log.go:172] (0xc00299d290) Data frame received for 5 I0622 22:04:02.841458 6 log.go:172] (0xc0029e10e0) (5) Data frame handling I0622 22:04:02.841491 6 log.go:172] (0xc00299d290) Data frame received for 3 I0622 22:04:02.841518 6 log.go:172] (0xc0028bc8c0) (3) Data frame handling I0622 22:04:02.841543 6 log.go:172] (0xc0028bc8c0) (3) Data frame sent I0622 22:04:02.841560 6 log.go:172] (0xc00299d290) Data frame received for 3 I0622 22:04:02.841574 6 log.go:172] (0xc0028bc8c0) (3) Data frame handling I0622 22:04:02.843118 6 log.go:172] (0xc00299d290) Data frame received for 1 I0622 22:04:02.843137 6 log.go:172] (0xc001e041e0) (1) Data frame handling I0622 22:04:02.843145 6 log.go:172] (0xc001e041e0) (1) Data frame sent I0622 22:04:02.843244 6 log.go:172] (0xc00299d290) (0xc001e041e0) Stream removed, broadcasting: 1 I0622 22:04:02.843336 6 log.go:172] (0xc00299d290) (0xc001e041e0) Stream removed, broadcasting: 1 I0622 22:04:02.843351 6 log.go:172] (0xc00299d290) (0xc0028bc8c0) Stream removed, broadcasting: 3 I0622 22:04:02.843406 6 log.go:172] (0xc00299d290) Go away received I0622 22:04:02.843442 6 
log.go:172] (0xc00299d290) (0xc0029e10e0) Stream removed, broadcasting: 5 Jun 22 22:04:02.843: INFO: Exec stderr: "" Jun 22 22:04:02.843: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.843: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.874840 6 log.go:172] (0xc001bf4dc0) (0xc0029e1360) Create stream I0622 22:04:02.874864 6 log.go:172] (0xc001bf4dc0) (0xc0029e1360) Stream added, broadcasting: 1 I0622 22:04:02.876679 6 log.go:172] (0xc001bf4dc0) Reply frame received for 1 I0622 22:04:02.876717 6 log.go:172] (0xc001bf4dc0) (0xc0017179a0) Create stream I0622 22:04:02.876729 6 log.go:172] (0xc001bf4dc0) (0xc0017179a0) Stream added, broadcasting: 3 I0622 22:04:02.877950 6 log.go:172] (0xc001bf4dc0) Reply frame received for 3 I0622 22:04:02.877982 6 log.go:172] (0xc001bf4dc0) (0xc0028bc960) Create stream I0622 22:04:02.877992 6 log.go:172] (0xc001bf4dc0) (0xc0028bc960) Stream added, broadcasting: 5 I0622 22:04:02.878686 6 log.go:172] (0xc001bf4dc0) Reply frame received for 5 I0622 22:04:02.942849 6 log.go:172] (0xc001bf4dc0) Data frame received for 5 I0622 22:04:02.942880 6 log.go:172] (0xc0028bc960) (5) Data frame handling I0622 22:04:02.942916 6 log.go:172] (0xc001bf4dc0) Data frame received for 3 I0622 22:04:02.942942 6 log.go:172] (0xc0017179a0) (3) Data frame handling I0622 22:04:02.942966 6 log.go:172] (0xc0017179a0) (3) Data frame sent I0622 22:04:02.942977 6 log.go:172] (0xc001bf4dc0) Data frame received for 3 I0622 22:04:02.942984 6 log.go:172] (0xc0017179a0) (3) Data frame handling I0622 22:04:02.944080 6 log.go:172] (0xc001bf4dc0) Data frame received for 1 I0622 22:04:02.944097 6 log.go:172] (0xc0029e1360) (1) Data frame handling I0622 22:04:02.944108 6 log.go:172] (0xc0029e1360) (1) Data frame sent I0622 22:04:02.944119 6 log.go:172] (0xc001bf4dc0) (0xc0029e1360) Stream removed, broadcasting: 1 I0622 22:04:02.944132 6 log.go:172] (0xc001bf4dc0) Go away received I0622 22:04:02.944267 6 log.go:172] (0xc001bf4dc0) (0xc0029e1360) Stream removed, broadcasting: 1 I0622 22:04:02.944287 6 log.go:172] (0xc001bf4dc0) (0xc0017179a0) Stream removed, broadcasting: 3 I0622 22:04:02.944298 6 log.go:172] (0xc001bf4dc0) (0xc0028bc960) Stream removed, broadcasting: 5 Jun 22 22:04:02.944: INFO: Exec stderr: "" Jun 22 22:04:02.944: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:02.944: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:02.973517 6 log.go:172] (0xc00299d8c0) (0xc001e043c0) Create stream I0622 22:04:02.973545 6 log.go:172] (0xc00299d8c0) (0xc001e043c0) Stream added, broadcasting: 1 I0622 22:04:02.975879 6 log.go:172] (0xc00299d8c0) Reply frame received for 1 I0622 22:04:02.975927 6 log.go:172] (0xc00299d8c0) (0xc0029e1400) Create stream I0622 22:04:02.975964 6 log.go:172] (0xc00299d8c0) (0xc0029e1400) Stream added, broadcasting: 3 I0622 22:04:02.976921 6 log.go:172] (0xc00299d8c0) Reply frame received for 3 I0622 22:04:02.976967 6 log.go:172] (0xc00299d8c0) (0xc001717a40) Create stream I0622 22:04:02.976984 6 log.go:172] (0xc00299d8c0) (0xc001717a40) Stream added, broadcasting: 5 I0622 22:04:02.978210 6 log.go:172] (0xc00299d8c0) Reply frame received for 5 I0622 22:04:03.051666 6 log.go:172] (0xc00299d8c0) Data frame 
received for 5 I0622 22:04:03.051716 6 log.go:172] (0xc001717a40) (5) Data frame handling I0622 22:04:03.051749 6 log.go:172] (0xc00299d8c0) Data frame received for 3 I0622 22:04:03.051764 6 log.go:172] (0xc0029e1400) (3) Data frame handling I0622 22:04:03.051785 6 log.go:172] (0xc0029e1400) (3) Data frame sent I0622 22:04:03.051800 6 log.go:172] (0xc00299d8c0) Data frame received for 3 I0622 22:04:03.051812 6 log.go:172] (0xc0029e1400) (3) Data frame handling I0622 22:04:03.052905 6 log.go:172] (0xc00299d8c0) Data frame received for 1 I0622 22:04:03.052938 6 log.go:172] (0xc001e043c0) (1) Data frame handling I0622 22:04:03.052959 6 log.go:172] (0xc001e043c0) (1) Data frame sent I0622 22:04:03.052980 6 log.go:172] (0xc00299d8c0) (0xc001e043c0) Stream removed, broadcasting: 1 I0622 22:04:03.053001 6 log.go:172] (0xc00299d8c0) Go away received I0622 22:04:03.053283 6 log.go:172] (0xc00299d8c0) (0xc001e043c0) Stream removed, broadcasting: 1 I0622 22:04:03.053318 6 log.go:172] (0xc00299d8c0) (0xc0029e1400) Stream removed, broadcasting: 3 I0622 22:04:03.053336 6 log.go:172] (0xc00299d8c0) (0xc001717a40) Stream removed, broadcasting: 5 Jun 22 22:04:03.053: INFO: Exec stderr: "" Jun 22 22:04:03.053: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5109 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:03.053: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:03.087773 6 log.go:172] (0xc002491080) (0xc001717ea0) Create stream I0622 22:04:03.087801 6 log.go:172] (0xc002491080) (0xc001717ea0) Stream added, broadcasting: 1 I0622 22:04:03.090180 6 log.go:172] (0xc002491080) Reply frame received for 1 I0622 22:04:03.090222 6 log.go:172] (0xc002491080) (0xc001e04460) Create stream I0622 22:04:03.090240 6 log.go:172] (0xc002491080) (0xc001e04460) Stream added, broadcasting: 3 I0622 22:04:03.091224 6 log.go:172] (0xc002491080) Reply frame received for 3 I0622 22:04:03.091264 6 log.go:172] (0xc002491080) (0xc0028bca00) Create stream I0622 22:04:03.091281 6 log.go:172] (0xc002491080) (0xc0028bca00) Stream added, broadcasting: 5 I0622 22:04:03.092141 6 log.go:172] (0xc002491080) Reply frame received for 5 I0622 22:04:03.162026 6 log.go:172] (0xc002491080) Data frame received for 5 I0622 22:04:03.162085 6 log.go:172] (0xc0028bca00) (5) Data frame handling I0622 22:04:03.162112 6 log.go:172] (0xc002491080) Data frame received for 3 I0622 22:04:03.162132 6 log.go:172] (0xc001e04460) (3) Data frame handling I0622 22:04:03.162155 6 log.go:172] (0xc001e04460) (3) Data frame sent I0622 22:04:03.162178 6 log.go:172] (0xc002491080) Data frame received for 3 I0622 22:04:03.162213 6 log.go:172] (0xc001e04460) (3) Data frame handling I0622 22:04:03.163685 6 log.go:172] (0xc002491080) Data frame received for 1 I0622 22:04:03.163716 6 log.go:172] (0xc001717ea0) (1) Data frame handling I0622 22:04:03.163739 6 log.go:172] (0xc001717ea0) (1) Data frame sent I0622 22:04:03.163761 6 log.go:172] (0xc002491080) (0xc001717ea0) Stream removed, broadcasting: 1 I0622 22:04:03.163818 6 log.go:172] (0xc002491080) Go away received I0622 22:04:03.163870 6 log.go:172] (0xc002491080) (0xc001717ea0) Stream removed, broadcasting: 1 I0622 22:04:03.163888 6 log.go:172] (0xc002491080) (0xc001e04460) Stream removed, broadcasting: 3 I0622 22:04:03.163908 6 log.go:172] (0xc002491080) (0xc0028bca00) Stream removed, broadcasting: 5 Jun 22 22:04:03.163: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:04:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5109" for this suite. • [SLOW TEST:11.524 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3601,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:04:03.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:04:03.304: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 22 22:04:08.307: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 22:04:08.307: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 22 22:04:08.396: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3682 /apis/apps/v1/namespaces/deployment-3682/deployments/test-cleanup-deployment 4ffd3ed0-a0ea-4da8-81ee-58999a2ab8ee 26494137 1 2020-06-22 22:04:08 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00250c3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 22 22:04:08.435: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3682 /apis/apps/v1/namespaces/deployment-3682/replicasets/test-cleanup-deployment-55ffc6b7b6 761e741a-435e-4cde-87fc-a4135e9ae036 26494146 1 2020-06-22 22:04:08 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4ffd3ed0-a0ea-4da8-81ee-58999a2ab8ee 0xc00366bcd7 0xc00366bcd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00366bd48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:04:08.435: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 22 22:04:08.435: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3682 /apis/apps/v1/namespaces/deployment-3682/replicasets/test-cleanup-controller dbb3dcb2-b62c-4f4b-937a-684cceb8be23 26494139 1 2020-06-22 22:04:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 4ffd3ed0-a0ea-4da8-81ee-58999a2ab8ee 0xc00366bc07 0xc00366bc08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00366bc68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun
22 22:04:08.474: INFO: Pod "test-cleanup-controller-28q7m" is available: &Pod{ObjectMeta:{test-cleanup-controller-28q7m test-cleanup-controller- deployment-3682 /api/v1/namespaces/deployment-3682/pods/test-cleanup-controller-28q7m 5ea82fa3-6794-46a3-9d8b-1a534dd68b28 26494126 0 2020-06-22 22:04:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller dbb3dcb2-b62c-4f4b-937a-684cceb8be23 0xc0028c94c7 0xc0028c94c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svffn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svffn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svffn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-22 22:04:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.26,StartTime:2020-06-22 22:04:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 22:04:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a6c0011ba1eb54b8077ac3e0048388213696b4fe45fdf438dfc9cc2d1963946,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 22 22:04:08.474: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-tp748" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-tp748 test-cleanup-deployment-55ffc6b7b6- deployment-3682 /api/v1/namespaces/deployment-3682/pods/test-cleanup-deployment-55ffc6b7b6-tp748 128b249e-9b54-4628-bd1c-7b5f5a8d2ca3 26494147 0 2020-06-22 22:04:08 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 761e741a-435e-4cde-87fc-a4135e9ae036 0xc0028c9707 0xc0028c9708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svffn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svffn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svffn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tole
rations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:04:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3682" for this suite. • [SLOW TEST:5.381 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":217,"skipped":3615,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:04:08.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:04:09.177: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:04:11.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:04:13.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460249, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:04:16.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:04:16.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9073" for this suite. STEP: Destroying namespace "webhook-9073-markers" for this suite. 
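For anyone reproducing the update/patch steps above by hand: the suite drives them through client-go, but the same rule edit can be sketched with kubectl. In the sketch below, the configuration name e2e-test-webhook-cfg and the single-webhook, single-rule layout are assumptions for illustration, not values taken from this run.

# Drop CREATE from the first rule; non-compliant ConfigMap creates then
# bypass the webhook (this mirrors the "Updating ... to not include the
# create operation" STEP).
kubectl patch validatingwebhookconfiguration e2e-test-webhook-cfg --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

# Patch CREATE back in (the "Patching ... to include the create operation"
# STEP); the next ConfigMap that violates the webhook's policy is denied.
kubectl patch validatingwebhookconfiguration e2e-test-webhook-cfg --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'

Either way, the admission decision itself lives in the webhook server; the rules only control which operations are forwarded to it for review.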
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.921 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":218,"skipped":3622,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:04:16.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-77 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 22:04:16.581: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 22:04:45.054: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.27 8081 | grep -v '^\s*$'] Namespace:pod-network-test-77 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:45.054: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:45.089802 6 log.go:172] (0xc00299ce70) (0xc0023af360) Create stream I0622 22:04:45.089829 6 log.go:172] (0xc00299ce70) (0xc0023af360) Stream added, broadcasting: 1 I0622 22:04:45.101653 6 log.go:172] (0xc00299ce70) Reply frame received for 1 I0622 22:04:45.101705 6 log.go:172] (0xc00299ce70) (0xc0027e61e0) Create stream I0622 22:04:45.101719 6 log.go:172] (0xc00299ce70) (0xc0027e61e0) Stream added, broadcasting: 3 I0622 22:04:45.102762 6 log.go:172] (0xc00299ce70) Reply frame received for 3 I0622 22:04:45.102789 6 log.go:172] (0xc00299ce70) (0xc0027e6320) Create stream I0622 22:04:45.102797 6 log.go:172] (0xc00299ce70) (0xc0027e6320) Stream added, broadcasting: 5 I0622 22:04:45.104863 6 log.go:172] (0xc00299ce70) Reply frame received for 5 I0622 22:04:46.180908 6 log.go:172] (0xc00299ce70) Data frame received for 5 I0622 22:04:46.180948 6 log.go:172] (0xc0027e6320) (5) Data frame handling I0622 22:04:46.180982 6 log.go:172] (0xc00299ce70) Data frame received for 3 I0622 22:04:46.181000 6 log.go:172] (0xc0027e61e0) (3) Data frame handling I0622 22:04:46.181018 6 log.go:172] (0xc0027e61e0) (3) Data frame sent I0622 22:04:46.181270 6 log.go:172] (0xc00299ce70) Data frame received for 3 I0622 22:04:46.181284 6 log.go:172] (0xc0027e61e0) (3) Data frame handling I0622 22:04:46.182742 6 log.go:172] (0xc00299ce70) Data frame 
received for 1 I0622 22:04:46.182769 6 log.go:172] (0xc0023af360) (1) Data frame handling I0622 22:04:46.182794 6 log.go:172] (0xc0023af360) (1) Data frame sent I0622 22:04:46.182806 6 log.go:172] (0xc00299ce70) (0xc0023af360) Stream removed, broadcasting: 1 I0622 22:04:46.182822 6 log.go:172] (0xc00299ce70) Go away received I0622 22:04:46.182935 6 log.go:172] (0xc00299ce70) (0xc0023af360) Stream removed, broadcasting: 1 I0622 22:04:46.182957 6 log.go:172] (0xc00299ce70) (0xc0027e61e0) Stream removed, broadcasting: 3 I0622 22:04:46.182965 6 log.go:172] (0xc00299ce70) (0xc0027e6320) Stream removed, broadcasting: 5 Jun 22 22:04:46.182: INFO: Found all expected endpoints: [netserver-0] Jun 22 22:04:46.185: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.107 8081 | grep -v '^\s*$'] Namespace:pod-network-test-77 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:04:46.185: INFO: >>> kubeConfig: /root/.kube/config I0622 22:04:46.218071 6 log.go:172] (0xc00299d3f0) (0xc0023af540) Create stream I0622 22:04:46.218100 6 log.go:172] (0xc00299d3f0) (0xc0023af540) Stream added, broadcasting: 1 I0622 22:04:46.220027 6 log.go:172] (0xc00299d3f0) Reply frame received for 1 I0622 22:04:46.220079 6 log.go:172] (0xc00299d3f0) (0xc001504460) Create stream I0622 22:04:46.220093 6 log.go:172] (0xc00299d3f0) (0xc001504460) Stream added, broadcasting: 3 I0622 22:04:46.221731 6 log.go:172] (0xc00299d3f0) Reply frame received for 3 I0622 22:04:46.221788 6 log.go:172] (0xc00299d3f0) (0xc001504500) Create stream I0622 22:04:46.221810 6 log.go:172] (0xc00299d3f0) (0xc001504500) Stream added, broadcasting: 5 I0622 22:04:46.222810 6 log.go:172] (0xc00299d3f0) Reply frame received for 5 I0622 22:04:47.293375 6 log.go:172] (0xc00299d3f0) Data frame received for 3 I0622 22:04:47.293424 6 log.go:172] (0xc001504460) (3) Data frame handling I0622 22:04:47.293455 6 log.go:172] (0xc001504460) (3) Data frame sent I0622 22:04:47.293541 6 log.go:172] (0xc00299d3f0) Data frame received for 3 I0622 22:04:47.293620 6 log.go:172] (0xc001504460) (3) Data frame handling I0622 22:04:47.293989 6 log.go:172] (0xc00299d3f0) Data frame received for 5 I0622 22:04:47.294020 6 log.go:172] (0xc001504500) (5) Data frame handling I0622 22:04:47.295778 6 log.go:172] (0xc00299d3f0) Data frame received for 1 I0622 22:04:47.295799 6 log.go:172] (0xc0023af540) (1) Data frame handling I0622 22:04:47.295812 6 log.go:172] (0xc0023af540) (1) Data frame sent I0622 22:04:47.295840 6 log.go:172] (0xc00299d3f0) (0xc0023af540) Stream removed, broadcasting: 1 I0622 22:04:47.295887 6 log.go:172] (0xc00299d3f0) Go away received I0622 22:04:47.295930 6 log.go:172] (0xc00299d3f0) (0xc0023af540) Stream removed, broadcasting: 1 I0622 22:04:47.295950 6 log.go:172] (0xc00299d3f0) (0xc001504460) Stream removed, broadcasting: 3 I0622 22:04:47.295963 6 log.go:172] (0xc00299d3f0) (0xc001504500) Stream removed, broadcasting: 5 Jun 22 22:04:47.295: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:04:47.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-77" for this suite. 
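The UDP probes recorded in the ExecWithOptions lines above are plain netcat. Run by hand against this run's pod IPs, the check looks like the sketch below (pod and namespace names are the ones from this log); the framework then compares the captured stdout against the expected endpoint names, which is what the "Found all expected endpoints: [netserver-0]" lines record.

# Send "hostName" over UDP to a netserver pod and keep only non-blank
# reply lines; an empty result means the endpoint never answered.
kubectl exec -n pod-network-test-77 host-test-container-pod -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.27 8081 | grep -v '^\s*\$'"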
• [SLOW TEST:30.828 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:04:47.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:04:47.390: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 22 22:04:47.425: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 22 22:04:52.438: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 22:04:52.438: INFO: Creating deployment "test-rolling-update-deployment" Jun 22 22:04:52.450: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 22 22:04:52.474: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 22 22:04:54.626: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 22 22:04:54.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:04:56.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460292, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:04:58.790: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 22 22:04:58.800: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3706 /apis/apps/v1/namespaces/deployment-3706/deployments/test-rolling-update-deployment 810651b2-463d-417f-9fcd-013faad40707 26494523 1 2020-06-22 22:04:52 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005430878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-22 22:04:52 +0000 UTC,LastTransitionTime:2020-06-22 22:04:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-06-22 22:04:56 +0000 UTC,LastTransitionTime:2020-06-22 22:04:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 22 22:04:58.804: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3706 /apis/apps/v1/namespaces/deployment-3706/replicasets/test-rolling-update-deployment-67cf4f6444
b059f54e-8c0f-49c2-9772-848b28330c0a 26494512 1 2020-06-22 22:04:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 810651b2-463d-417f-9fcd-013faad40707 0xc0028e3e57 0xc0028e3e58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028e3ec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:04:58.804: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 22 22:04:58.804: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3706 /apis/apps/v1/namespaces/deployment-3706/replicasets/test-rolling-update-controller 1393f6ea-c005-4dce-8a8e-818499264213 26494521 2 2020-06-22 22:04:47 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 810651b2-463d-417f-9fcd-013faad40707 0xc0028e3d6f 0xc0028e3d80}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028e3de8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:04:58.808: INFO: Pod "test-rolling-update-deployment-67cf4f6444-mmwmp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-mmwmp test-rolling-update-deployment-67cf4f6444- deployment-3706 /api/v1/namespaces/deployment-3706/pods/test-rolling-update-deployment-67cf4f6444-mmwmp 0e0b389c-bb8b-4cd2-a058-683416358c9d 26494511 0 2020-06-22 22:04:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 
b059f54e-8c0f-49c2-9772-848b28330c0a 0xc005565047 0xc005565048}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hv4ww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hv4ww,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hv4ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:04:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.28,StartTime:2020-06-22 22:04:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 22:04:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c363f1c058685a8f27171a215506d7989e46d215e147658f8a341c00d214b919,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:04:58.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3706" for this suite. • [SLOW TEST:11.505 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":220,"skipped":3652,"failed":0} SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:04:58.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-658231f7-e54d-4967-bd24-41030b960fc6 STEP: Creating configMap with name cm-test-opt-upd-e18c0c1a-baac-41a8-ac2e-8f05e8344639 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-658231f7-e54d-4967-bd24-41030b960fc6 STEP: Updating configmap cm-test-opt-upd-e18c0c1a-baac-41a8-ac2e-8f05e8344639 STEP: Creating configMap with name cm-test-opt-create-cdb91195-ce72-4847-b57b-a85e4e7bc81b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:05:09.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5382" for this suite. 
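For orientation, the fixture this spec exercises is three ConfigMaps behind one projected volume, each marked optional so the pod tolerates a source being deleted or not yet created. A minimal sketch with illustrative names follows; the suite creates its equivalents through the API, using the generated cm-test-opt-* names shown in the STEPs above, and the exact images and paths here are assumptions.

kubectl create configmap cm-del --from-literal=data-1=value-1
kubectl create configmap cm-upd --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-del
          optional: true
          items: [{key: data-1, path: del/data-1}]
      - configMap:
          name: cm-upd
          optional: true
          items: [{key: data-1, path: upd/data-1}]
      - configMap:
          name: cm-create   # does not exist yet; optional keeps the pod Running
          optional: true
          items: [{key: data-1, path: create/data-1}]
EOF

# Mutate the sources, then watch the mounted tree converge; the kubelet
# re-syncs projected volumes periodically, so allow up to a minute.
kubectl delete configmap cm-del
kubectl patch configmap cm-upd -p '{"data":{"data-1":"value-2"}}'
kubectl create configmap cm-create --from-literal=data-1=value-1
kubectl exec projected-cm-demo -- sh -c 'cat /etc/projected/upd/data-1; ls /etc/projected'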
• [SLOW TEST:10.336 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3654,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:05:09.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4910 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4910 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4910 Jun 22 22:05:09.248: INFO: Found 0 stateful pods, waiting for 1 Jun 22 22:05:19.258: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 22 22:05:19.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 22:05:22.161: INFO: stderr: "I0622 22:05:22.008405 2319 log.go:172] (0xc00057ae70) (0xc000249b80) Create stream\nI0622 22:05:22.008446 2319 log.go:172] (0xc00057ae70) (0xc000249b80) Stream added, broadcasting: 1\nI0622 22:05:22.011055 2319 log.go:172] (0xc00057ae70) Reply frame received for 1\nI0622 22:05:22.011102 2319 log.go:172] (0xc00057ae70) (0xc00059a000) Create stream\nI0622 22:05:22.011117 2319 log.go:172] (0xc00057ae70) (0xc00059a000) Stream added, broadcasting: 3\nI0622 22:05:22.012045 2319 log.go:172] (0xc00057ae70) Reply frame received for 3\nI0622 22:05:22.012084 2319 log.go:172] (0xc00057ae70) (0xc0005cc000) Create stream\nI0622 22:05:22.012099 2319 log.go:172] (0xc00057ae70) (0xc0005cc000) Stream added, broadcasting: 5\nI0622 22:05:22.012960 2319 log.go:172] (0xc00057ae70) Reply frame received for 5\nI0622 22:05:22.117355 2319 log.go:172] (0xc00057ae70) Data frame received for 5\nI0622 22:05:22.117377 2319 log.go:172] (0xc0005cc000) (5) Data frame handling\nI0622 22:05:22.117390 2319 log.go:172] (0xc0005cc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 22:05:22.149091 2319 log.go:172] 
(0xc00057ae70) Data frame received for 3\nI0622 22:05:22.149252 2319 log.go:172] (0xc00059a000) (3) Data frame handling\nI0622 22:05:22.149273 2319 log.go:172] (0xc00059a000) (3) Data frame sent\nI0622 22:05:22.149908 2319 log.go:172] (0xc00057ae70) Data frame received for 5\nI0622 22:05:22.149939 2319 log.go:172] (0xc0005cc000) (5) Data frame handling\nI0622 22:05:22.150011 2319 log.go:172] (0xc00057ae70) Data frame received for 3\nI0622 22:05:22.150031 2319 log.go:172] (0xc00059a000) (3) Data frame handling\nI0622 22:05:22.152065 2319 log.go:172] (0xc00057ae70) Data frame received for 1\nI0622 22:05:22.152083 2319 log.go:172] (0xc000249b80) (1) Data frame handling\nI0622 22:05:22.152098 2319 log.go:172] (0xc000249b80) (1) Data frame sent\nI0622 22:05:22.152111 2319 log.go:172] (0xc00057ae70) (0xc000249b80) Stream removed, broadcasting: 1\nI0622 22:05:22.152235 2319 log.go:172] (0xc00057ae70) Go away received\nI0622 22:05:22.152399 2319 log.go:172] (0xc00057ae70) (0xc000249b80) Stream removed, broadcasting: 1\nI0622 22:05:22.152411 2319 log.go:172] (0xc00057ae70) (0xc00059a000) Stream removed, broadcasting: 3\nI0622 22:05:22.152417 2319 log.go:172] (0xc00057ae70) (0xc0005cc000) Stream removed, broadcasting: 5\n" Jun 22 22:05:22.161: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 22:05:22.161: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 22:05:22.198: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 22 22:05:32.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 22:05:32.203: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 22:05:32.217: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:05:32.217: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:05:32.217: INFO: Jun 22 22:05:32.217: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 22 22:05:33.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996971443s Jun 22 22:05:34.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96742328s Jun 22 22:05:35.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.82236177s Jun 22 22:05:36.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.811014938s Jun 22 22:05:37.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.769448664s Jun 22 22:05:38.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.76455471s Jun 22 22:05:39.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.760061232s Jun 22 22:05:40.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.755935656s Jun 22 22:05:41.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 750.998351ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4910 Jun 22 22:05:42.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-4910 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 22:05:42.705: INFO: stderr: "I0622 22:05:42.611448 2354 log.go:172] (0xc0000f4f20) (0xc0006f5cc0) Create stream\nI0622 22:05:42.611509 2354 log.go:172] (0xc0000f4f20) (0xc0006f5cc0) Stream added, broadcasting: 1\nI0622 22:05:42.613979 2354 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0622 22:05:42.614014 2354 log.go:172] (0xc0000f4f20) (0xc000928000) Create stream\nI0622 22:05:42.614031 2354 log.go:172] (0xc0000f4f20) (0xc000928000) Stream added, broadcasting: 3\nI0622 22:05:42.614874 2354 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0622 22:05:42.614911 2354 log.go:172] (0xc0000f4f20) (0xc0009280a0) Create stream\nI0622 22:05:42.614925 2354 log.go:172] (0xc0000f4f20) (0xc0009280a0) Stream added, broadcasting: 5\nI0622 22:05:42.615683 2354 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0622 22:05:42.700552 2354 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0622 22:05:42.700589 2354 log.go:172] (0xc000928000) (3) Data frame handling\nI0622 22:05:42.700596 2354 log.go:172] (0xc000928000) (3) Data frame sent\nI0622 22:05:42.700602 2354 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0622 22:05:42.700606 2354 log.go:172] (0xc000928000) (3) Data frame handling\nI0622 22:05:42.700631 2354 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0622 22:05:42.700639 2354 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0622 22:05:42.700645 2354 log.go:172] (0xc0009280a0) (5) Data frame sent\nI0622 22:05:42.700656 2354 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0622 22:05:42.700661 2354 log.go:172] (0xc0009280a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0622 22:05:42.701869 2354 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0622 22:05:42.701883 2354 log.go:172] (0xc0006f5cc0) (1) Data frame handling\nI0622 22:05:42.701893 2354 log.go:172] (0xc0006f5cc0) (1) Data frame sent\nI0622 22:05:42.701909 2354 log.go:172] (0xc0000f4f20) (0xc0006f5cc0) Stream removed, broadcasting: 1\nI0622 22:05:42.701948 2354 log.go:172] (0xc0000f4f20) Go away received\nI0622 22:05:42.702212 2354 log.go:172] (0xc0000f4f20) (0xc0006f5cc0) Stream removed, broadcasting: 1\nI0622 22:05:42.702229 2354 log.go:172] (0xc0000f4f20) (0xc000928000) Stream removed, broadcasting: 3\nI0622 22:05:42.702239 2354 log.go:172] (0xc0000f4f20) (0xc0009280a0) Stream removed, broadcasting: 5\n" Jun 22 22:05:42.705: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 22:05:42.705: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 22:05:42.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 22:05:42.913: INFO: stderr: "I0622 22:05:42.832727 2376 log.go:172] (0xc0000f5600) (0xc0007421e0) Create stream\nI0622 22:05:42.832808 2376 log.go:172] (0xc0000f5600) (0xc0007421e0) Stream added, broadcasting: 1\nI0622 22:05:42.835325 2376 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0622 22:05:42.835361 2376 log.go:172] (0xc0000f5600) (0xc0005dfb80) Create stream\nI0622 22:05:42.835375 2376 log.go:172] (0xc0000f5600) (0xc0005dfb80) Stream added, broadcasting: 3\nI0622 22:05:42.836114 2376 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0622 
22:05:42.836139 2376 log.go:172] (0xc0000f5600) (0xc000798820) Create stream\nI0622 22:05:42.836157 2376 log.go:172] (0xc0000f5600) (0xc000798820) Stream added, broadcasting: 5\nI0622 22:05:42.836868 2376 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0622 22:05:42.903824 2376 log.go:172] (0xc0000f5600) Data frame received for 5\nI0622 22:05:42.903887 2376 log.go:172] (0xc000798820) (5) Data frame handling\nI0622 22:05:42.903903 2376 log.go:172] (0xc000798820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0622 22:05:42.903929 2376 log.go:172] (0xc0000f5600) Data frame received for 3\nI0622 22:05:42.903956 2376 log.go:172] (0xc0005dfb80) (3) Data frame handling\nI0622 22:05:42.903970 2376 log.go:172] (0xc0005dfb80) (3) Data frame sent\nI0622 22:05:42.903982 2376 log.go:172] (0xc0000f5600) Data frame received for 3\nI0622 22:05:42.903992 2376 log.go:172] (0xc0005dfb80) (3) Data frame handling\nI0622 22:05:42.904024 2376 log.go:172] (0xc0000f5600) Data frame received for 5\nI0622 22:05:42.904040 2376 log.go:172] (0xc000798820) (5) Data frame handling\nI0622 22:05:42.905989 2376 log.go:172] (0xc0000f5600) Data frame received for 1\nI0622 22:05:42.906036 2376 log.go:172] (0xc0007421e0) (1) Data frame handling\nI0622 22:05:42.906078 2376 log.go:172] (0xc0007421e0) (1) Data frame sent\nI0622 22:05:42.906119 2376 log.go:172] (0xc0000f5600) (0xc0007421e0) Stream removed, broadcasting: 1\nI0622 22:05:42.906157 2376 log.go:172] (0xc0000f5600) Go away received\nI0622 22:05:42.906542 2376 log.go:172] (0xc0000f5600) (0xc0007421e0) Stream removed, broadcasting: 1\nI0622 22:05:42.906568 2376 log.go:172] (0xc0000f5600) (0xc0005dfb80) Stream removed, broadcasting: 3\nI0622 22:05:42.906580 2376 log.go:172] (0xc0000f5600) (0xc000798820) Stream removed, broadcasting: 5\n" Jun 22 22:05:42.913: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 22:05:42.913: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 22:05:42.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 22 22:05:43.129: INFO: stderr: "I0622 22:05:43.048993 2399 log.go:172] (0xc0008a86e0) (0xc0005b1d60) Create stream\nI0622 22:05:43.049057 2399 log.go:172] (0xc0008a86e0) (0xc0005b1d60) Stream added, broadcasting: 1\nI0622 22:05:43.051941 2399 log.go:172] (0xc0008a86e0) Reply frame received for 1\nI0622 22:05:43.051996 2399 log.go:172] (0xc0008a86e0) (0xc00085c000) Create stream\nI0622 22:05:43.052020 2399 log.go:172] (0xc0008a86e0) (0xc00085c000) Stream added, broadcasting: 3\nI0622 22:05:43.053284 2399 log.go:172] (0xc0008a86e0) Reply frame received for 3\nI0622 22:05:43.053327 2399 log.go:172] (0xc0008a86e0) (0xc0005b1e00) Create stream\nI0622 22:05:43.053340 2399 log.go:172] (0xc0008a86e0) (0xc0005b1e00) Stream added, broadcasting: 5\nI0622 22:05:43.054406 2399 log.go:172] (0xc0008a86e0) Reply frame received for 5\nI0622 22:05:43.120210 2399 log.go:172] (0xc0008a86e0) Data frame received for 5\nI0622 22:05:43.120254 2399 log.go:172] (0xc0008a86e0) Data frame received for 3\nI0622 22:05:43.120286 2399 log.go:172] (0xc00085c000) (3) Data frame handling\nI0622 22:05:43.120300 2399 log.go:172] (0xc00085c000) (3) Data frame sent\nI0622 22:05:43.120311 2399 log.go:172] 
(0xc0008a86e0) Data frame received for 3\nI0622 22:05:43.120323 2399 log.go:172] (0xc00085c000) (3) Data frame handling\nI0622 22:05:43.120363 2399 log.go:172] (0xc0005b1e00) (5) Data frame handling\nI0622 22:05:43.120389 2399 log.go:172] (0xc0005b1e00) (5) Data frame sent\nI0622 22:05:43.120402 2399 log.go:172] (0xc0008a86e0) Data frame received for 5\nI0622 22:05:43.120414 2399 log.go:172] (0xc0005b1e00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0622 22:05:43.122179 2399 log.go:172] (0xc0008a86e0) Data frame received for 1\nI0622 22:05:43.122200 2399 log.go:172] (0xc0005b1d60) (1) Data frame handling\nI0622 22:05:43.122220 2399 log.go:172] (0xc0005b1d60) (1) Data frame sent\nI0622 22:05:43.122379 2399 log.go:172] (0xc0008a86e0) (0xc0005b1d60) Stream removed, broadcasting: 1\nI0622 22:05:43.122407 2399 log.go:172] (0xc0008a86e0) Go away received\nI0622 22:05:43.122733 2399 log.go:172] (0xc0008a86e0) (0xc0005b1d60) Stream removed, broadcasting: 1\nI0622 22:05:43.122754 2399 log.go:172] (0xc0008a86e0) (0xc00085c000) Stream removed, broadcasting: 3\nI0622 22:05:43.122765 2399 log.go:172] (0xc0008a86e0) (0xc0005b1e00) Stream removed, broadcasting: 5\n" Jun 22 22:05:43.129: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 22 22:05:43.129: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 22 22:05:43.133: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 22 22:05:53.138: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 22:05:53.138: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 22:05:53.138: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 22 22:05:53.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 22:05:53.370: INFO: stderr: "I0622 22:05:53.270025 2420 log.go:172] (0xc000894b00) (0xc000abc1e0) Create stream\nI0622 22:05:53.270082 2420 log.go:172] (0xc000894b00) (0xc000abc1e0) Stream added, broadcasting: 1\nI0622 22:05:53.274427 2420 log.go:172] (0xc000894b00) Reply frame received for 1\nI0622 22:05:53.274479 2420 log.go:172] (0xc000894b00) (0xc0005526e0) Create stream\nI0622 22:05:53.274508 2420 log.go:172] (0xc000894b00) (0xc0005526e0) Stream added, broadcasting: 3\nI0622 22:05:53.275493 2420 log.go:172] (0xc000894b00) Reply frame received for 3\nI0622 22:05:53.275652 2420 log.go:172] (0xc000894b00) (0xc00060fae0) Create stream\nI0622 22:05:53.275664 2420 log.go:172] (0xc000894b00) (0xc00060fae0) Stream added, broadcasting: 5\nI0622 22:05:53.276583 2420 log.go:172] (0xc000894b00) Reply frame received for 5\nI0622 22:05:53.363090 2420 log.go:172] (0xc000894b00) Data frame received for 3\nI0622 22:05:53.363154 2420 log.go:172] (0xc0005526e0) (3) Data frame handling\nI0622 22:05:53.363171 2420 log.go:172] (0xc0005526e0) (3) Data frame sent\nI0622 22:05:53.363181 2420 log.go:172] (0xc000894b00) Data frame received for 3\nI0622 22:05:53.363190 2420 log.go:172] (0xc0005526e0) (3) Data frame handling\nI0622 22:05:53.363226 2420 log.go:172] (0xc000894b00) Data frame received for 
5\nI0622 22:05:53.363238 2420 log.go:172] (0xc00060fae0) (5) Data frame handling\nI0622 22:05:53.363254 2420 log.go:172] (0xc00060fae0) (5) Data frame sent\nI0622 22:05:53.363263 2420 log.go:172] (0xc000894b00) Data frame received for 5\nI0622 22:05:53.363272 2420 log.go:172] (0xc00060fae0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 22:05:53.364495 2420 log.go:172] (0xc000894b00) Data frame received for 1\nI0622 22:05:53.364527 2420 log.go:172] (0xc000abc1e0) (1) Data frame handling\nI0622 22:05:53.364560 2420 log.go:172] (0xc000abc1e0) (1) Data frame sent\nI0622 22:05:53.364598 2420 log.go:172] (0xc000894b00) (0xc000abc1e0) Stream removed, broadcasting: 1\nI0622 22:05:53.364622 2420 log.go:172] (0xc000894b00) Go away received\nI0622 22:05:53.365403 2420 log.go:172] (0xc000894b00) (0xc000abc1e0) Stream removed, broadcasting: 1\nI0622 22:05:53.365428 2420 log.go:172] (0xc000894b00) (0xc0005526e0) Stream removed, broadcasting: 3\nI0622 22:05:53.365440 2420 log.go:172] (0xc000894b00) (0xc00060fae0) Stream removed, broadcasting: 5\n" Jun 22 22:05:53.370: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 22:05:53.370: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 22:05:53.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 22:05:53.634: INFO: stderr: "I0622 22:05:53.517395 2442 log.go:172] (0xc0009fc0b0) (0xc0006fb360) Create stream\nI0622 22:05:53.517499 2442 log.go:172] (0xc0009fc0b0) (0xc0006fb360) Stream added, broadcasting: 1\nI0622 22:05:53.520926 2442 log.go:172] (0xc0009fc0b0) Reply frame received for 1\nI0622 22:05:53.521018 2442 log.go:172] (0xc0009fc0b0) (0xc00088e000) Create stream\nI0622 22:05:53.521050 2442 log.go:172] (0xc0009fc0b0) (0xc00088e000) Stream added, broadcasting: 3\nI0622 22:05:53.522789 2442 log.go:172] (0xc0009fc0b0) Reply frame received for 3\nI0622 22:05:53.522819 2442 log.go:172] (0xc0009fc0b0) (0xc00088e0a0) Create stream\nI0622 22:05:53.522828 2442 log.go:172] (0xc0009fc0b0) (0xc00088e0a0) Stream added, broadcasting: 5\nI0622 22:05:53.523690 2442 log.go:172] (0xc0009fc0b0) Reply frame received for 5\nI0622 22:05:53.592461 2442 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0622 22:05:53.592494 2442 log.go:172] (0xc00088e0a0) (5) Data frame handling\nI0622 22:05:53.592514 2442 log.go:172] (0xc00088e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 22:05:53.623814 2442 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0622 22:05:53.623857 2442 log.go:172] (0xc00088e000) (3) Data frame handling\nI0622 22:05:53.623893 2442 log.go:172] (0xc00088e000) (3) Data frame sent\nI0622 22:05:53.623912 2442 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0622 22:05:53.623932 2442 log.go:172] (0xc00088e000) (3) Data frame handling\nI0622 22:05:53.624193 2442 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0622 22:05:53.624234 2442 log.go:172] (0xc00088e0a0) (5) Data frame handling\nI0622 22:05:53.625738 2442 log.go:172] (0xc0009fc0b0) Data frame received for 1\nI0622 22:05:53.625772 2442 log.go:172] (0xc0006fb360) (1) Data frame handling\nI0622 22:05:53.625796 2442 log.go:172] (0xc0006fb360) (1) Data frame sent\nI0622 22:05:53.625828 2442 log.go:172] (0xc0009fc0b0) (0xc0006fb360) Stream removed, 
broadcasting: 1\nI0622 22:05:53.625861 2442 log.go:172] (0xc0009fc0b0) Go away received\nI0622 22:05:53.626280 2442 log.go:172] (0xc0009fc0b0) (0xc0006fb360) Stream removed, broadcasting: 1\nI0622 22:05:53.626307 2442 log.go:172] (0xc0009fc0b0) (0xc00088e000) Stream removed, broadcasting: 3\nI0622 22:05:53.626326 2442 log.go:172] (0xc0009fc0b0) (0xc00088e0a0) Stream removed, broadcasting: 5\n" Jun 22 22:05:53.634: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 22:05:53.634: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 22:05:53.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4910 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 22 22:05:53.843: INFO: stderr: "I0622 22:05:53.755912 2462 log.go:172] (0xc000a20000) (0xc0007c4000) Create stream\nI0622 22:05:53.755995 2462 log.go:172] (0xc000a20000) (0xc0007c4000) Stream added, broadcasting: 1\nI0622 22:05:53.758064 2462 log.go:172] (0xc000a20000) Reply frame received for 1\nI0622 22:05:53.758108 2462 log.go:172] (0xc000a20000) (0xc000671cc0) Create stream\nI0622 22:05:53.758121 2462 log.go:172] (0xc000a20000) (0xc000671cc0) Stream added, broadcasting: 3\nI0622 22:05:53.758964 2462 log.go:172] (0xc000a20000) Reply frame received for 3\nI0622 22:05:53.758983 2462 log.go:172] (0xc000a20000) (0xc0002c1540) Create stream\nI0622 22:05:53.758991 2462 log.go:172] (0xc000a20000) (0xc0002c1540) Stream added, broadcasting: 5\nI0622 22:05:53.760048 2462 log.go:172] (0xc000a20000) Reply frame received for 5\nI0622 22:05:53.806001 2462 log.go:172] (0xc000a20000) Data frame received for 5\nI0622 22:05:53.806052 2462 log.go:172] (0xc0002c1540) (5) Data frame handling\nI0622 22:05:53.806087 2462 log.go:172] (0xc0002c1540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0622 22:05:53.834362 2462 log.go:172] (0xc000a20000) Data frame received for 5\nI0622 22:05:53.834393 2462 log.go:172] (0xc0002c1540) (5) Data frame handling\nI0622 22:05:53.834417 2462 log.go:172] (0xc000a20000) Data frame received for 3\nI0622 22:05:53.834428 2462 log.go:172] (0xc000671cc0) (3) Data frame handling\nI0622 22:05:53.834440 2462 log.go:172] (0xc000671cc0) (3) Data frame sent\nI0622 22:05:53.834485 2462 log.go:172] (0xc000a20000) Data frame received for 3\nI0622 22:05:53.834530 2462 log.go:172] (0xc000671cc0) (3) Data frame handling\nI0622 22:05:53.836363 2462 log.go:172] (0xc000a20000) Data frame received for 1\nI0622 22:05:53.836445 2462 log.go:172] (0xc0007c4000) (1) Data frame handling\nI0622 22:05:53.836476 2462 log.go:172] (0xc0007c4000) (1) Data frame sent\nI0622 22:05:53.836515 2462 log.go:172] (0xc000a20000) (0xc0007c4000) Stream removed, broadcasting: 1\nI0622 22:05:53.836546 2462 log.go:172] (0xc000a20000) Go away received\nI0622 22:05:53.836842 2462 log.go:172] (0xc000a20000) (0xc0007c4000) Stream removed, broadcasting: 1\nI0622 22:05:53.836860 2462 log.go:172] (0xc000a20000) (0xc000671cc0) Stream removed, broadcasting: 3\nI0622 22:05:53.836901 2462 log.go:172] (0xc000a20000) (0xc0002c1540) Stream removed, broadcasting: 5\n" Jun 22 22:05:53.843: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 22 22:05:53.843: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 22 22:05:53.843: INFO: Waiting 
for statefulset status.replicas updated to 0 Jun 22 22:05:53.846: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 22 22:06:03.859: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 22:06:03.859: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 22:06:03.859: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 22:06:03.906: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:03.907: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:03.907: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:03.907: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:03.907: INFO: Jun 22 22:06:03.907: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 22:06:04.910: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:04.910: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:04.911: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:04.911: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 
22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:04.911: INFO: Jun 22 22:06:04.911: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 22:06:05.988: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:05.988: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:05.988: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:05.988: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:05.988: INFO: Jun 22 22:06:05.988: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 22:06:07.002: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:07.002: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:07.002: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:07.002: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:32 +0000 UTC }] Jun 22 22:06:07.002: INFO: Jun 22 
22:06:07.002: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 22:06:08.006: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:08.006: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:08.007: INFO: Jun 22 22:06:08.007: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 22 22:06:09.011: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 22:06:09.011: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 22:05:09 +0000 UTC }] Jun 22 22:06:09.011: INFO: Jun 22 22:06:09.011: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 22 22:06:10.016: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.851830918s Jun 22 22:06:11.020: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.847396139s Jun 22 22:06:12.025: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.843340385s Jun 22 22:06:13.028: INFO: Verifying statefulset ss doesn't scale past 0 for another 838.221874ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4910 Jun 22 22:06:14.033: INFO: Scaling statefulset ss to 0 Jun 22 22:06:14.040: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 22 22:06:14.042: INFO: Deleting all statefulset in ns statefulset-4910 Jun 22 22:06:14.043: INFO: Scaling statefulset ss to 0 Jun 22 22:06:14.049: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 22:06:14.056: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:14.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4910" for this suite. 
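The scale-down phase above relies on a simple trick: the httpd replicas serve their readiness probe from /usr/local/apache2/htdocs/index.html, so moving that file out of the docroot flips each pod to Running but Ready=false without killing it, and the test then asserts that a burst-mode scale-down still proceeds. A rough way to replay the same sequence by hand against a comparable StatefulSet (the namespace, pod names, and docroot match the log; everything else is illustrative):

NS=statefulset-4910   # assumed namespace; substitute your own
# Break readiness on every replica by moving the probed file out of the docroot.
for pod in ss-0 ss-1 ss-2; do
  kubectl -n "$NS" exec "$pod" -- /bin/sh -c \
    'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done
# With burst mode (podManagementPolicy: Parallel) the scale-down is not
# blocked by the unready pods, which is exactly what the test asserts.
kubectl -n "$NS" scale statefulset ss --replicas=0
kubectl -n "$NS" get pods -w   # watch all three replicas terminate together
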
• [SLOW TEST:64.925 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":222,"skipped":3656,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:14.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:06:14.572: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:06:16.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:06:18.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460374, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:06:21.616: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 22 22:06:21.635: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:21.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7384" for this suite. STEP: Destroying namespace "webhook-7384-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.669 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":223,"skipped":3677,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:21.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:06:21.838: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:22.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5387" for this suite. 
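The status sub-resource exercise above is driven through the Go client, but the same calls can be sketched with kubectl. A minimal illustration, assuming a CRD that declares subresources.status (the CRD name and the storedVersions value here are hypothetical, and the --subresource flag needs a newer kubectl, v1.24+, than the v1.17 client used in this run):

# Read the CRD's own status (Established/NamesAccepted conditions live here).
kubectl get crd noxus.example.com -o jsonpath='{.status.conditions}'
# Patch only the status sub-resource; the spec is left untouched.
kubectl patch crd noxus.example.com --subresource=status --type=merge \
  -p '{"status":{"storedVersions":["v1"]}}'
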
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":224,"skipped":3680,"failed":0} ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:22.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5cf0194d-6c84-4921-9e65-826d8a209664 STEP: Creating a pod to test consume secrets Jun 22 22:06:22.832: INFO: Waiting up to 5m0s for pod "pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc" in namespace "secrets-1982" to be "success or failure" Jun 22 22:06:22.847: INFO: Pod "pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.810405ms Jun 22 22:06:24.882: INFO: Pod "pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049857077s Jun 22 22:06:26.885: INFO: Pod "pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052557037s STEP: Saw pod success Jun 22 22:06:26.885: INFO: Pod "pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc" satisfied condition "success or failure" Jun 22 22:06:26.887: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc container secret-env-test: STEP: delete the pod Jun 22 22:06:27.103: INFO: Waiting for pod pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc to disappear Jun 22 22:06:27.136: INFO: Pod pod-secrets-604091ca-2fa6-4585-b069-47e14d545bcc no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:27.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1982" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3680,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:27.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-d2f89e65-cb0b-4a41-bbe3-66688fc72e70 STEP: Creating secret with name secret-projected-all-test-volume-ec367f3b-0329-48e4-a938-47e807ac06a3 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 22 22:06:27.376: INFO: Waiting up to 5m0s for pod "projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5" in namespace "projected-1106" to be "success or failure" Jun 22 22:06:27.429: INFO: Pod "projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.627652ms Jun 22 22:06:29.441: INFO: Pod "projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06485199s Jun 22 22:06:31.446: INFO: Pod "projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069125504s STEP: Saw pod success Jun 22 22:06:31.446: INFO: Pod "projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5" satisfied condition "success or failure" Jun 22 22:06:31.448: INFO: Trying to get logs from node jerma-worker pod projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5 container projected-all-volume-test: STEP: delete the pod Jun 22 22:06:31.525: INFO: Waiting for pod projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5 to disappear Jun 22 22:06:31.530: INFO: Pod projected-volume-695f50d6-7050-4ce7-ae5e-af28140792a5 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:31.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1106" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3693,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:31.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:06:31.580: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:32.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5997" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":227,"skipped":3698,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:32.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:06:32.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7" in namespace "downward-api-7954" to be "success or failure" Jun 22 22:06:32.896: INFO: Pod "downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232934ms Jun 22 22:06:34.901: INFO: Pod "downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007801411s Jun 22 22:06:36.905: INFO: Pod "downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011945964s STEP: Saw pod success Jun 22 22:06:36.905: INFO: Pod "downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7" satisfied condition "success or failure" Jun 22 22:06:36.908: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7 container client-container: STEP: delete the pod Jun 22 22:06:36.943: INFO: Waiting for pod downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7 to disappear Jun 22 22:06:36.956: INFO: Pod downwardapi-volume-7c92f472-9832-46fd-b72c-d016311332e7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:36.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7954" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:36.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jun 22 22:06:37.024: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4724" to be "success or failure" Jun 22 22:06:37.074: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 49.893643ms Jun 22 22:06:39.078: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054092344s Jun 22 22:06:41.083: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.058470048s Jun 22 22:06:43.087: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06288942s STEP: Saw pod success Jun 22 22:06:43.087: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 22 22:06:43.090: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 22 22:06:43.159: INFO: Waiting for pod pod-host-path-test to disappear Jun 22 22:06:43.172: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:43.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4724" for this suite. 
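The hostPath check above boils down to mounting a node directory into a pod and inspecting the mode bits of the mount point from inside the container. A rough equivalent (the node path is arbitrary; DirectoryOrCreate makes the kubelet create it if absent):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-path-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    # Print the mode of the mount point, which is what the conformance
    # test asserts on.
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-demo
      type: DirectoryOrCreate
EOF
kubectl logs host-path-mode-demo   # once Succeeded, prints e.g. drwxr-xr-x ...
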
• [SLOW TEST:6.213 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3746,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:43.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 22:06:47.386: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2043" for this suite. 
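The FallbackToLogsOnError case that follows has a compact reproduction: write nothing to /dev/termination-log, exit non-zero, and let the kubelet lift the termination message from the container log instead. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: term-demo
    image: busybox:1.29
    # Nothing is written to /dev/termination-log; the non-zero exit plus
    # FallbackToLogsOnError makes the kubelet use the log tail ("DONE").
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Read the message back once the container has terminated:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
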
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3757,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:47.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 22:06:51.574: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:06:51.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7694" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3774,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:06:51.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0622 22:07:22.221023 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 22 22:07:22.221: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:07:22.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-911" for this suite. • [SLOW TEST:30.604 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":232,"skipped":3786,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:07:22.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jun 22 22:07:22.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5847' Jun 22 22:07:22.672: INFO: stderr: "" Jun 22 22:07:22.672: INFO: stdout: "pod/pause created\n" Jun 22 22:07:22.672: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 22 22:07:22.672: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5847" to be "running and ready" Jun 22 22:07:22.677: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596292ms Jun 22 22:07:24.680: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007641621s Jun 22 22:07:26.684: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.011966467s Jun 22 22:07:26.684: INFO: Pod "pause" satisfied condition "running and ready" Jun 22 22:07:26.684: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jun 22 22:07:26.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5847' Jun 22 22:07:26.793: INFO: stderr: "" Jun 22 22:07:26.793: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 22 22:07:26.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5847' Jun 22 22:07:26.877: INFO: stderr: "" Jun 22 22:07:26.877: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 22 22:07:26.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5847' Jun 22 22:07:26.964: INFO: stderr: "" Jun 22 22:07:26.964: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 22 22:07:26.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5847' Jun 22 22:07:27.045: INFO: stderr: "" Jun 22 22:07:27.045: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jun 22 22:07:27.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5847' Jun 22 22:07:27.179: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 22:07:27.179: INFO: stdout: "pod \"pause\" force deleted\n" Jun 22 22:07:27.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5847' Jun 22 22:07:27.714: INFO: stderr: "No resources found in kubectl-5847 namespace.\n" Jun 22 22:07:27.714: INFO: stdout: "" Jun 22 22:07:27.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5847 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 22:07:27.878: INFO: stderr: "" Jun 22 22:07:27.878: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:07:27.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5847" for this suite. 
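Stripped of the e2e wrapper, the label round-trip above is three kubectl calls; the trailing dash in the last one is the removal syntax:

kubectl -n kubectl-5847 label pod pause testing-label=testing-label-value
kubectl -n kubectl-5847 get pod pause -L testing-label   # shows the new column
kubectl -n kubectl-5847 label pod pause testing-label-   # removes the label
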
• [SLOW TEST:5.735 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":233,"skipped":3794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:07:27.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jun 22 22:07:28.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7267' Jun 22 22:07:30.022: INFO: stderr: "" Jun 22 22:07:30.022: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 22:07:30.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:30.158: INFO: stderr: "" Jun 22 22:07:30.158: INFO: stdout: "update-demo-nautilus-9mwdx update-demo-nautilus-kkvsw " Jun 22 22:07:30.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:30.291: INFO: stderr: "" Jun 22 22:07:30.291: INFO: stdout: "" Jun 22 22:07:30.291: INFO: update-demo-nautilus-9mwdx is created but not running Jun 22 22:07:35.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:35.402: INFO: stderr: "" Jun 22 22:07:35.403: INFO: stdout: "update-demo-nautilus-9mwdx update-demo-nautilus-kkvsw " Jun 22 22:07:35.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:35.509: INFO: stderr: "" Jun 22 22:07:35.509: INFO: stdout: "true" Jun 22 22:07:35.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:35.608: INFO: stderr: "" Jun 22 22:07:35.608: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:07:35.608: INFO: validating pod update-demo-nautilus-9mwdx Jun 22 22:07:35.621: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:07:35.621: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:07:35.621: INFO: update-demo-nautilus-9mwdx is verified up and running Jun 22 22:07:35.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkvsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:35.714: INFO: stderr: "" Jun 22 22:07:35.714: INFO: stdout: "true" Jun 22 22:07:35.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkvsw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:35.805: INFO: stderr: "" Jun 22 22:07:35.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:07:35.805: INFO: validating pod update-demo-nautilus-kkvsw Jun 22 22:07:35.808: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:07:35.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:07:35.809: INFO: update-demo-nautilus-kkvsw is verified up and running STEP: scaling down the replication controller Jun 22 22:07:35.841: INFO: scanned /root for discovery docs: Jun 22 22:07:35.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7267' Jun 22 22:07:36.960: INFO: stderr: "" Jun 22 22:07:36.960: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 22 22:07:36.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:37.072: INFO: stderr: "" Jun 22 22:07:37.072: INFO: stdout: "update-demo-nautilus-9mwdx update-demo-nautilus-kkvsw " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 22 22:07:42.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:42.186: INFO: stderr: "" Jun 22 22:07:42.186: INFO: stdout: "update-demo-nautilus-9mwdx update-demo-nautilus-kkvsw " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 22 22:07:47.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:47.297: INFO: stderr: "" Jun 22 22:07:47.297: INFO: stdout: "update-demo-nautilus-9mwdx update-demo-nautilus-kkvsw " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 22 22:07:52.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:52.415: INFO: stderr: "" Jun 22 22:07:52.415: INFO: stdout: "update-demo-nautilus-9mwdx " Jun 22 22:07:52.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:52.515: INFO: stderr: "" Jun 22 22:07:52.515: INFO: stdout: "true" Jun 22 22:07:52.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:52.618: INFO: stderr: "" Jun 22 22:07:52.618: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:07:52.618: INFO: validating pod update-demo-nautilus-9mwdx Jun 22 22:07:52.622: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:07:52.622: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:07:52.622: INFO: update-demo-nautilus-9mwdx is verified up and running STEP: scaling up the replication controller Jun 22 22:07:52.625: INFO: scanned /root for discovery docs: Jun 22 22:07:52.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7267' Jun 22 22:07:53.751: INFO: stderr: "" Jun 22 22:07:53.751: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 22 22:07:53.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:53.851: INFO: stderr: "" Jun 22 22:07:53.851: INFO: stdout: "update-demo-nautilus-696jq update-demo-nautilus-9mwdx " Jun 22 22:07:53.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-696jq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:53.933: INFO: stderr: "" Jun 22 22:07:53.933: INFO: stdout: "" Jun 22 22:07:53.933: INFO: update-demo-nautilus-696jq is created but not running Jun 22 22:07:58.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7267' Jun 22 22:07:59.044: INFO: stderr: "" Jun 22 22:07:59.044: INFO: stdout: "update-demo-nautilus-696jq update-demo-nautilus-9mwdx " Jun 22 22:07:59.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-696jq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:59.137: INFO: stderr: "" Jun 22 22:07:59.137: INFO: stdout: "true" Jun 22 22:07:59.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-696jq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:59.257: INFO: stderr: "" Jun 22 22:07:59.257: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:07:59.257: INFO: validating pod update-demo-nautilus-696jq Jun 22 22:07:59.261: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:07:59.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:07:59.261: INFO: update-demo-nautilus-696jq is verified up and running Jun 22 22:07:59.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:59.354: INFO: stderr: "" Jun 22 22:07:59.354: INFO: stdout: "true" Jun 22 22:07:59.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7267' Jun 22 22:07:59.452: INFO: stderr: "" Jun 22 22:07:59.452: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:07:59.452: INFO: validating pod update-demo-nautilus-9mwdx Jun 22 22:07:59.456: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:07:59.456: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 22 22:07:59.456: INFO: update-demo-nautilus-9mwdx is verified up and running STEP: using delete to clean up resources Jun 22 22:07:59.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7267' Jun 22 22:07:59.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 22:07:59.571: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 22:07:59.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7267' Jun 22 22:07:59.662: INFO: stderr: "No resources found in kubectl-7267 namespace.\n" Jun 22 22:07:59.662: INFO: stdout: "" Jun 22 22:07:59.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7267 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 22:07:59.761: INFO: stderr: "" Jun 22 22:07:59.761: INFO: stdout: "update-demo-nautilus-696jq\nupdate-demo-nautilus-9mwdx\n" Jun 22 22:08:00.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7267' Jun 22 22:08:00.373: INFO: stderr: "No resources found in kubectl-7267 namespace.\n" Jun 22 22:08:00.373: INFO: stdout: "" Jun 22 22:08:00.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7267 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 22:08:00.477: INFO: stderr: "" Jun 22 22:08:00.477: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:00.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7267" for this suite. 
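The scale-down/scale-up sequence above can be reproduced with plain kubectl. A minimal sketch, assuming the same replication controller and namespace as in this run; the status template is copied from the one the test itself runs to confirm the update-demo container is up:

# Scale the replication controller down to one replica, then back to two,
# waiting up to five minutes for each operation (same flags as the test).
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7267
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7267

# Prints "true" once the update-demo container in the named pod is running.
kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mwdx --namespace=kubectl-7267 -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'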
• [SLOW TEST:32.520 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":234,"skipped":3818,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:00.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1934 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1934 I0622 22:08:00.978217 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1934, replica count: 2 I0622 22:08:04.028634 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 22:08:07.028889 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 22:08:07.028: INFO: Creating new exec pod Jun 22 22:08:12.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1934 execpod9cqcs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 22 22:08:12.299: INFO: stderr: "I0622 22:08:12.182137 3231 log.go:172] (0xc000a826e0) (0xc0006a1ea0) Create stream\nI0622 22:08:12.182198 3231 log.go:172] (0xc000a826e0) (0xc0006a1ea0) Stream added, broadcasting: 1\nI0622 22:08:12.184525 3231 log.go:172] (0xc000a826e0) Reply frame received for 1\nI0622 22:08:12.184579 3231 log.go:172] (0xc000a826e0) (0xc000652780) Create stream\nI0622 22:08:12.184617 3231 log.go:172] (0xc000a826e0) (0xc000652780) Stream added, broadcasting: 3\nI0622 22:08:12.185928 3231 log.go:172] (0xc000a826e0) Reply frame received for 3\nI0622 22:08:12.185967 3231 log.go:172] (0xc000a826e0) (0xc0006a1f40) Create stream\nI0622 22:08:12.185982 3231 log.go:172] (0xc000a826e0) (0xc0006a1f40) Stream added, broadcasting: 5\nI0622 22:08:12.186877 3231 log.go:172] (0xc000a826e0) Reply frame received for 5\nI0622 22:08:12.274049 3231 log.go:172] (0xc000a826e0) Data frame received for 5\nI0622 22:08:12.274091 3231 log.go:172] (0xc0006a1f40) (5) Data frame handling\nI0622 22:08:12.274126 3231 log.go:172] (0xc0006a1f40) (5) Data 
frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0622 22:08:12.290213 3231 log.go:172] (0xc000a826e0) Data frame received for 5\nI0622 22:08:12.290232 3231 log.go:172] (0xc0006a1f40) (5) Data frame handling\nI0622 22:08:12.290243 3231 log.go:172] (0xc0006a1f40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0622 22:08:12.290577 3231 log.go:172] (0xc000a826e0) Data frame received for 5\nI0622 22:08:12.290608 3231 log.go:172] (0xc0006a1f40) (5) Data frame handling\nI0622 22:08:12.290758 3231 log.go:172] (0xc000a826e0) Data frame received for 3\nI0622 22:08:12.290784 3231 log.go:172] (0xc000652780) (3) Data frame handling\nI0622 22:08:12.292320 3231 log.go:172] (0xc000a826e0) Data frame received for 1\nI0622 22:08:12.292339 3231 log.go:172] (0xc0006a1ea0) (1) Data frame handling\nI0622 22:08:12.292351 3231 log.go:172] (0xc0006a1ea0) (1) Data frame sent\nI0622 22:08:12.292368 3231 log.go:172] (0xc000a826e0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0622 22:08:12.292642 3231 log.go:172] (0xc000a826e0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0622 22:08:12.292655 3231 log.go:172] (0xc000a826e0) (0xc000652780) Stream removed, broadcasting: 3\nI0622 22:08:12.292862 3231 log.go:172] (0xc000a826e0) (0xc0006a1f40) Stream removed, broadcasting: 5\nI0622 22:08:12.292945 3231 log.go:172] (0xc000a826e0) Go away received\n" Jun 22 22:08:12.299: INFO: stdout: "" Jun 22 22:08:12.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1934 execpod9cqcs -- /bin/sh -x -c nc -zv -t -w 2 10.111.186.116 80' Jun 22 22:08:12.514: INFO: stderr: "I0622 22:08:12.424522 3254 log.go:172] (0xc0009e8630) (0xc0009cc000) Create stream\nI0622 22:08:12.424576 3254 log.go:172] (0xc0009e8630) (0xc0009cc000) Stream added, broadcasting: 1\nI0622 22:08:12.427020 3254 log.go:172] (0xc0009e8630) Reply frame received for 1\nI0622 22:08:12.427069 3254 log.go:172] (0xc0009e8630) (0xc000a30000) Create stream\nI0622 22:08:12.427086 3254 log.go:172] (0xc0009e8630) (0xc000a30000) Stream added, broadcasting: 3\nI0622 22:08:12.428024 3254 log.go:172] (0xc0009e8630) Reply frame received for 3\nI0622 22:08:12.428065 3254 log.go:172] (0xc0009e8630) (0xc0006c1ae0) Create stream\nI0622 22:08:12.428080 3254 log.go:172] (0xc0009e8630) (0xc0006c1ae0) Stream added, broadcasting: 5\nI0622 22:08:12.428827 3254 log.go:172] (0xc0009e8630) Reply frame received for 5\nI0622 22:08:12.505334 3254 log.go:172] (0xc0009e8630) Data frame received for 3\nI0622 22:08:12.505555 3254 log.go:172] (0xc0009e8630) Data frame received for 5\nI0622 22:08:12.505611 3254 log.go:172] (0xc0006c1ae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.186.116 80\nConnection to 10.111.186.116 80 port [tcp/http] succeeded!\nI0622 22:08:12.505672 3254 log.go:172] (0xc000a30000) (3) Data frame handling\nI0622 22:08:12.505730 3254 log.go:172] (0xc0006c1ae0) (5) Data frame sent\nI0622 22:08:12.505758 3254 log.go:172] (0xc0009e8630) Data frame received for 5\nI0622 22:08:12.505768 3254 log.go:172] (0xc0006c1ae0) (5) Data frame handling\nI0622 22:08:12.507009 3254 log.go:172] (0xc0009e8630) Data frame received for 1\nI0622 22:08:12.507035 3254 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0622 22:08:12.507049 3254 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0622 22:08:12.507085 3254 log.go:172] (0xc0009e8630) (0xc0009cc000) Stream removed, broadcasting: 1\nI0622 22:08:12.507124 3254 log.go:172] (0xc0009e8630) Go away received\nI0622 22:08:12.507619 3254 log.go:172] 
(0xc0009e8630) (0xc0009cc000) Stream removed, broadcasting: 1\nI0622 22:08:12.507643 3254 log.go:172] (0xc0009e8630) (0xc000a30000) Stream removed, broadcasting: 3\nI0622 22:08:12.507655 3254 log.go:172] (0xc0009e8630) (0xc0006c1ae0) Stream removed, broadcasting: 5\n" Jun 22 22:08:12.514: INFO: stdout: "" Jun 22 22:08:12.514: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:12.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1934" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.116 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":235,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:12.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
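A pod carrying a postStart exec hook, of the kind the step below creates, can be sketched as follows. The image and hook command here are illustrative assumptions; the framework builds the real spec programmatically:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name as it appears in the log below
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # assumption: any image with /bin/sh works
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
EOF

Kubernetes does not mark the container Running until the postStart handler completes, which is the behavior this test relies on when it checks the hook before deleting the pod.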
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 22 22:08:20.737: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:20.744: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 22:08:22.744: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:22.750: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 22:08:24.744: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:24.749: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 22:08:26.744: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:26.748: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 22:08:28.744: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:28.749: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 22:08:30.744: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 22:08:30.749: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:30.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9159" for this suite. • [SLOW TEST:18.159 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:30.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:08:31.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:08:33.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:08:36.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4017" for this suite. STEP: Destroying namespace "webhook-4017-markers" for this suite. 
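Listing and collection-deleting validating webhook configurations, as exercised here, maps onto ordinary kubectl calls. A minimal sketch; the label selector is an illustrative assumption (the test likewise selects its own configurations by label):

# List every ValidatingWebhookConfiguration in the cluster.
kubectl get validatingwebhookconfigurations

# Delete a labeled collection of configurations in a single call;
# "e2e-list-test-webhooks=example" stands in for whatever label yours carry.
kubectl delete validatingwebhookconfigurations -l e2e-list-test-webhooks=example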
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.807 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":237,"skipped":3899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:37.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-0c4ede1c-4494-418b-b0e1-d0696755a81b STEP: Creating a pod to test consume secrets Jun 22 22:08:37.620: INFO: Waiting up to 5m0s for pod "pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e" in namespace "secrets-9215" to be "success or failure" Jun 22 22:08:37.624: INFO: Pod "pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.518339ms Jun 22 22:08:39.668: INFO: Pod "pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048595112s Jun 22 22:08:41.673: INFO: Pod "pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053096244s STEP: Saw pod success Jun 22 22:08:41.673: INFO: Pod "pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e" satisfied condition "success or failure" Jun 22 22:08:41.676: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e container secret-volume-test: STEP: delete the pod Jun 22 22:08:41.751: INFO: Waiting for pod pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e to disappear Jun 22 22:08:41.756: INFO: Pod pod-secrets-3fc6b801-8ab4-4eda-aa7c-5b442701ea1e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:41.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9215" for this suite. 
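The "volume with mappings" case corresponds to a secret volume whose items remap a key onto a custom path with an explicit file mode. A minimal sketch with illustrative names; the test image and data differ:

kubectl create secret generic secret-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                    # assumption: the e2e test uses its own image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      items:
      - key: data-1
        path: new-path-data-1         # the mapping: key exposed under a new path
        mode: 0400
EOF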
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3925,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:41.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 22 22:08:41.818: INFO: Created pod &Pod{ObjectMeta:{dns-5096 dns-5096 /api/v1/namespaces/dns-5096/pods/dns-5096 ba6bb332-b3c3-4c31-85d1-94b4f41a74f8 26496053 0 2020-06-22 22:08:41 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgk9l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgk9l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgk9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jun 22 22:08:45.852: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5096 PodName:dns-5096 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:08:45.852: INFO: >>> kubeConfig: /root/.kube/config I0622 22:08:45.883395 6 log.go:172] (0xc00299ce70) (0xc0015059a0) Create stream I0622 22:08:45.883429 6 log.go:172] (0xc00299ce70) (0xc0015059a0) Stream added, broadcasting: 1 I0622 22:08:45.885795 6 log.go:172] (0xc00299ce70) Reply frame received for 1 I0622 22:08:45.885838 6 log.go:172] (0xc00299ce70) (0xc001505c20) Create stream I0622 22:08:45.885850 6 log.go:172] (0xc00299ce70) (0xc001505c20) Stream added, broadcasting: 3 I0622 22:08:45.886950 6 log.go:172] (0xc00299ce70) Reply frame received for 3 I0622 22:08:45.886991 6 log.go:172] (0xc00299ce70) (0xc002844460) Create stream I0622 22:08:45.887006 6 log.go:172] (0xc00299ce70) (0xc002844460) Stream added, broadcasting: 5 I0622 22:08:45.887815 6 log.go:172] (0xc00299ce70) Reply frame received for 5 I0622 22:08:45.987336 6 log.go:172] (0xc00299ce70) Data frame received for 3 I0622 22:08:45.987357 6 log.go:172] (0xc001505c20) (3) Data frame handling I0622 22:08:45.987368 6 log.go:172] (0xc001505c20) (3) Data frame sent I0622 22:08:45.988582 6 log.go:172] (0xc00299ce70) Data frame received for 5 I0622 22:08:45.988606 6 log.go:172] (0xc002844460) (5) Data frame handling I0622 22:08:45.988633 6 log.go:172] (0xc00299ce70) Data frame received for 3 I0622 22:08:45.988648 6 log.go:172] (0xc001505c20) (3) Data frame handling I0622 22:08:45.990544 6 log.go:172] (0xc00299ce70) Data frame received for 1 I0622 22:08:45.990564 6 log.go:172] (0xc0015059a0) (1) Data frame handling I0622 22:08:45.990578 6 log.go:172] (0xc0015059a0) (1) Data frame sent I0622 22:08:45.990701 6 log.go:172] (0xc00299ce70) (0xc0015059a0) Stream removed, broadcasting: 1 I0622 22:08:45.990764 6 log.go:172] (0xc00299ce70) Go away received I0622 22:08:45.990915 6 log.go:172] (0xc00299ce70) (0xc0015059a0) Stream removed, broadcasting: 1 I0622 22:08:45.990935 6 log.go:172] (0xc00299ce70) (0xc001505c20) Stream removed, broadcasting: 3 I0622 22:08:45.990950 6 log.go:172] (0xc00299ce70) (0xc002844460) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jun 22 22:08:45.990: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5096 PodName:dns-5096 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 22:08:45.991: INFO: >>> kubeConfig: /root/.kube/config I0622 22:08:46.019038 6 log.go:172] (0xc002490e70) (0xc000b5c8c0) Create stream I0622 22:08:46.019079 6 log.go:172] (0xc002490e70) (0xc000b5c8c0) Stream added, broadcasting: 1 I0622 22:08:46.020958 6 log.go:172] (0xc002490e70) Reply frame received for 1 I0622 22:08:46.021003 6 log.go:172] (0xc002490e70) (0xc002844640) Create stream I0622 22:08:46.021015 6 log.go:172] (0xc002490e70) (0xc002844640) Stream added, broadcasting: 3 I0622 22:08:46.022232 6 log.go:172] (0xc002490e70) Reply frame received for 3 I0622 22:08:46.022284 6 log.go:172] (0xc002490e70) (0xc000a6f4a0) Create stream I0622 22:08:46.022302 6 log.go:172] (0xc002490e70) (0xc000a6f4a0) Stream added, broadcasting: 5 I0622 22:08:46.023177 6 log.go:172] (0xc002490e70) Reply frame received for 5 I0622 22:08:46.135081 6 log.go:172] (0xc002490e70) Data frame received for 3 I0622 22:08:46.135134 6 log.go:172] (0xc002844640) (3) Data frame handling I0622 22:08:46.135166 6 log.go:172] (0xc002844640) (3) Data frame sent I0622 22:08:46.136978 6 log.go:172] (0xc002490e70) Data frame received for 5 I0622 22:08:46.137028 6 log.go:172] (0xc000a6f4a0) (5) Data frame handling I0622 22:08:46.137320 6 log.go:172] (0xc002490e70) Data frame received for 3 I0622 22:08:46.137337 6 log.go:172] (0xc002844640) (3) Data frame handling I0622 22:08:46.139231 6 log.go:172] (0xc002490e70) Data frame received for 1 I0622 22:08:46.139266 6 log.go:172] (0xc000b5c8c0) (1) Data frame handling I0622 22:08:46.139309 6 log.go:172] (0xc000b5c8c0) (1) Data frame sent I0622 22:08:46.139354 6 log.go:172] (0xc002490e70) (0xc000b5c8c0) Stream removed, broadcasting: 1 I0622 22:08:46.139389 6 log.go:172] (0xc002490e70) Go away received I0622 22:08:46.139483 6 log.go:172] (0xc002490e70) (0xc000b5c8c0) Stream removed, broadcasting: 1 I0622 22:08:46.139522 6 log.go:172] (0xc002490e70) (0xc002844640) Stream removed, broadcasting: 3 I0622 22:08:46.139541 6 log.go:172] (0xc002490e70) (0xc000a6f4a0) Stream removed, broadcasting: 5 Jun 22 22:08:46.139: INFO: Deleting pod dns-5096... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:46.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5096" for this suite. 
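The pod dump printed above is the programmatic equivalent of this spec: dnsPolicy None plus a custom dnsConfig, using the same nameserver, search suffix, image, and args the test set. A minimal sketch (pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  dnsPolicy: None                     # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]          # values taken from the pod dump above
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF

With dnsPolicy: None the pod's /etc/resolv.conf is built solely from dnsConfig, which is what the dns-suffix and dns-server-list probes verify.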
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":239,"skipped":3939,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:46.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-f94c379e-460e-43ba-b570-48a42081930d STEP: Creating secret with name s-test-opt-upd-da5e4c54-8451-4615-b478-d87925584429 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f94c379e-460e-43ba-b570-48a42081930d STEP: Updating secret s-test-opt-upd-da5e4c54-8451-4615-b478-d87925584429 STEP: Creating secret with name s-test-opt-create-0bff7206-07a5-479d-a6a3-e8d3a5462097 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:54.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4454" for this suite. • [SLOW TEST:8.604 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:54.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 22:08:54.914: INFO: Waiting up to 5m0s for pod "pod-6fb75cc1-3002-4f47-a379-0b670e3062f0" in namespace "emptydir-4363" to be "success or failure" Jun 22 22:08:54.943: INFO: Pod "pod-6fb75cc1-3002-4f47-a379-0b670e3062f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.017132ms Jun 22 22:08:56.948: INFO: Pod "pod-6fb75cc1-3002-4f47-a379-0b670e3062f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033357565s Jun 22 22:08:58.953: INFO: Pod "pod-6fb75cc1-3002-4f47-a379-0b670e3062f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038740771s STEP: Saw pod success Jun 22 22:08:58.953: INFO: Pod "pod-6fb75cc1-3002-4f47-a379-0b670e3062f0" satisfied condition "success or failure" Jun 22 22:08:58.956: INFO: Trying to get logs from node jerma-worker pod pod-6fb75cc1-3002-4f47-a379-0b670e3062f0 container test-container: STEP: delete the pod Jun 22 22:08:58.978: INFO: Waiting for pod pod-6fb75cc1-3002-4f47-a379-0b670e3062f0 to disappear Jun 22 22:08:59.002: INFO: Pod pod-6fb75cc1-3002-4f47-a379-0b670e3062f0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:08:59.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4363" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3993,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:08:59.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:08:59.131: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42" in namespace "security-context-test-5273" to be "success or failure" Jun 22 22:08:59.134: INFO: Pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.903009ms Jun 22 22:09:01.138: INFO: Pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006759166s Jun 22 22:09:03.143: INFO: Pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42": Phase="Running", Reason="", readiness=true. Elapsed: 4.011575726s Jun 22 22:09:05.147: INFO: Pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015864926s Jun 22 22:09:05.147: INFO: Pod "busybox-readonly-false-2c27805c-ecdc-4761-8b59-93b13f691f42" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:05.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5273" for this suite. • [SLOW TEST:6.147 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3996,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:05.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:09:05.244: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 5.9088ms)
Jun 22 22:09:05.299: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 54.683927ms)
Jun 22 22:09:05.303: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.045461ms)
Jun 22 22:09:05.307: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.869297ms)
Jun 22 22:09:05.310: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.351403ms)
Jun 22 22:09:05.313: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.006048ms)
Jun 22 22:09:05.316: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.037698ms)
Jun 22 22:09:05.320: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.213775ms)
Jun 22 22:09:05.323: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.945913ms)
Jun 22 22:09:05.327: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.375931ms)
Jun 22 22:09:05.330: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.377573ms)
Jun 22 22:09:05.334: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.1921ms)
Jun 22 22:09:05.337: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.968217ms)
Jun 22 22:09:05.340: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.298765ms)
Jun 22 22:09:05.344: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.881581ms)
Jun 22 22:09:05.347: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.723146ms)
Jun 22 22:09:05.352: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.873567ms)
Jun 22 22:09:05.355: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.925364ms)
Jun 22 22:09:05.358: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.496676ms)
Jun 22 22:09:05.361: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.787739ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:05.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9455" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":243,"skipped":3997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:05.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:21.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8976" for this suite. • [SLOW TEST:16.223 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":244,"skipped":4044,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:21.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 22 22:09:21.671: INFO: Waiting up to 5m0s for pod "downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4" in namespace "downward-api-1143" to be "success or failure" Jun 22 22:09:21.680: INFO: Pod "downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.913431ms Jun 22 22:09:23.684: INFO: Pod "downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01304272s Jun 22 22:09:25.688: INFO: Pod "downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017129387s STEP: Saw pod success Jun 22 22:09:25.688: INFO: Pod "downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4" satisfied condition "success or failure" Jun 22 22:09:25.691: INFO: Trying to get logs from node jerma-worker pod downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4 container dapi-container: STEP: delete the pod Jun 22 22:09:25.868: INFO: Waiting for pod downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4 to disappear Jun 22 22:09:25.877: INFO: Pod downward-api-2d1c2d49-6555-47d1-b7a6-8a889c4841a4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:25.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1143" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4052,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:25.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 22 22:09:25.935: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1511" for this suite. • [SLOW TEST:16.566 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":246,"skipped":4060,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:42.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:09:42.511: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:09:48.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-5574" for this suite. • [SLOW TEST:6.022 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":247,"skipped":4063,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:09:48.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:09:48.554: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 22 22:09:53.580: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 22:09:53.580: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 22 22:09:55.584: INFO: Creating deployment "test-rollover-deployment" Jun 22 22:09:55.598: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 22 22:09:57.603: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 22 22:09:57.610: INFO: Ensure that both replica sets have 1 created replica Jun 22 22:09:57.615: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 22 22:09:57.621: INFO: Updating deployment test-rollover-deployment Jun 22 22:09:57.621: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 22 22:09:59.645: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 22 22:09:59.651: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 22 22:09:59.656: INFO: all replica sets need to contain the pod-template-hash label Jun 22 22:09:59.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460597, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:01.664: INFO: all replica sets need to contain the pod-template-hash label Jun 22 22:10:01.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460601, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:03.664: INFO: all replica sets need to contain the pod-template-hash label Jun 22 22:10:03.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460601, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:05.663: INFO: all replica sets need to contain the pod-template-hash label Jun 22 22:10:05.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460601, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:07.664: INFO: all replica sets need to contain the 
pod-template-hash label Jun 22 22:10:07.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460601, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:09.673: INFO: all replica sets need to contain the pod-template-hash label Jun 22 22:10:09.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460601, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:10:11.666: INFO: Jun 22 22:10:11.666: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 22 22:10:11.673: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4185 /apis/apps/v1/namespaces/deployment-4185/deployments/test-rollover-deployment 14de1075-2113-4a05-addd-b1ea76e22d43 26496702 2 2020-06-22 22:09:55 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042a6488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-22 22:09:55 +0000 UTC,LastTransitionTime:2020-06-22 22:09:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-06-22 22:10:11 +0000 UTC,LastTransitionTime:2020-06-22 22:09:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 22 22:10:11.676: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-deployment-574d6dfbff 8d7cfce8-24ca-4331-b959-bf292892545b 26496691 2 2020-06-22 22:09:57 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 14de1075-2113-4a05-addd-b1ea76e22d43 0xc00423c1f7 0xc00423c1f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00423c268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:10:11.676: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 22 22:10:11.676: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-controller e347346b-dcee-4416-ae84-675b70cf2f08 26496700 2 2020-06-22 22:09:48 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 14de1075-2113-4a05-addd-b1ea76e22d43 0xc00423c127 0xc00423c128}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00423c188 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:10:11.677: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-deployment-f6c94f66c 24809f9e-8f85-4a7b-8d0f-d198de8d09fb 26496638 2 2020-06-22 22:09:55 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 14de1075-2113-4a05-addd-b1ea76e22d43 0xc00423c2d0 0xc00423c2d1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00423c348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 22 22:10:11.680: INFO: Pod "test-rollover-deployment-574d6dfbff-tbl77" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-tbl77 test-rollover-deployment-574d6dfbff- deployment-4185 /api/v1/namespaces/deployment-4185/pods/test-rollover-deployment-574d6dfbff-tbl77 dc606bbc-0643-4694-8a4f-0e51f496f367 26496659 0 2020-06-22 22:09:57 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 8d7cfce8-24ca-4331-b959-bf292892545b 0xc0042a6827 0xc0042a6828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8jlrf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8jlrf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8jlrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-22 22:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.46,StartTime:2020-06-22 22:09:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-22 22:10:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://32a147b408502e426d8a3928b6a8a9fc74b67fec23b3366a186efc61759e9dd5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:10:11.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4185" for this suite. • [SLOW TEST:23.215 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":248,"skipped":4067,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:10:11.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-vpdrl in namespace proxy-421 I0622 22:10:12.107063 6 runners.go:189] Created replication controller with name: proxy-service-vpdrl, namespace: proxy-421, replica count: 1 I0622 22:10:13.157523 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 22:10:14.157698 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 22:10:15.157931 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 22:10:16.158156 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 22:10:17.158378 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 22:10:18.158607 6 runners.go:189] proxy-service-vpdrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
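A note on the rollover Deployment that passed just above, before the proxy attempts begin. The spec dumped at 22:10:11 pins down the parameters: the pods start under a bare ReplicationController running docker.io/library/httpd:2.4.38-alpine, and a Deployment with replicas 1, a RollingUpdate strategy of maxUnavailable 0 / maxSurge 1, and minReadySeconds 10 then rolls them over to the agnhost image. A roughly equivalent manifest, reconstructed from that dump (the heredoc pattern is illustrative, not what the harness runs):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # a new pod must be Ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # so the rollover adds one new pod first
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF

With maxUnavailable: 0 the old ReplicaSet can only be scaled down after the surged pod has been Ready for the full minReadySeconds window, which is why the status above keeps reporting UnavailableReplicas:1 through roughly fourteen seconds of polling (22:09:57 to 22:10:11) before both old ReplicaSets drop to zero replicas.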
Jun 22 22:10:18.184: INFO: setup took 6.430180465s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 22 22:10:18.190: INFO: (0) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 5.311589ms) Jun 22 22:10:18.190: INFO: (0) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 6.932904ms) Jun 22 22:10:18.191: INFO: (0) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 6.974044ms) Jun 22 22:10:18.196: INFO: (0) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 11.536772ms) Jun 22 22:10:18.196: INFO: (0) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 11.596401ms) Jun 22 22:10:18.196: INFO: (0) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 11.608775ms) Jun 22 22:10:18.196: INFO: (0) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 11.774042ms) Jun 22 22:10:18.197: INFO: (0) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 12.33801ms) Jun 22 22:10:18.197: INFO: (0) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 12.570214ms) Jun 22 22:10:18.197: INFO: (0) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 12.52871ms) Jun 22 22:10:18.201: INFO: (0) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 17.146136ms) Jun 22 22:10:18.201: INFO: (0) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 17.031476ms) Jun 22 22:10:18.202: INFO: (0) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 17.895601ms) Jun 22 22:10:18.202: INFO: (0) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 18.008286ms) Jun 22 22:10:18.203: INFO: (0) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... (200; 3.831254ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 4.095717ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 4.031165ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 4.20553ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.224533ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.498193ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 4.523166ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 4.64066ms) Jun 22 22:10:18.207: INFO: (1) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtestt... 
(200; 3.78004ms) Jun 22 22:10:18.213: INFO: (2) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 5.495841ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 5.545876ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 5.589242ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 5.576868ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 5.614155ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 5.607469ms) Jun 22 22:10:18.214: INFO: (2) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 5.665548ms) Jun 22 22:10:18.220: INFO: (3) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: test (200; 7.648718ms) Jun 22 22:10:18.221: INFO: (3) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 7.5997ms) Jun 22 22:10:18.221: INFO: (3) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 7.599735ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 7.965985ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 8.071673ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 8.045567ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 8.023004ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 8.002734ms) Jun 22 22:10:18.222: INFO: (3) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 2.743311ms) Jun 22 22:10:18.227: INFO: (4) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.454133ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 4.669229ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.729367ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 4.786254ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 4.688284ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testt... (200; 4.780913ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 4.900144ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 5.076469ms) Jun 22 22:10:18.228: INFO: (4) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 5.301522ms) Jun 22 22:10:18.230: INFO: (4) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... 
(200; 5.992443ms) Jun 22 22:10:18.236: INFO: (5) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 6.035982ms) Jun 22 22:10:18.236: INFO: (5) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 5.987507ms) Jun 22 22:10:18.236: INFO: (5) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 6.053994ms) Jun 22 22:10:18.236: INFO: (5) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 4.932827ms) Jun 22 22:10:18.241: INFO: (6) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.919757ms) Jun 22 22:10:18.241: INFO: (6) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testt... (200; 5.839498ms) Jun 22 22:10:18.242: INFO: (6) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 5.779447ms) Jun 22 22:10:18.242: INFO: (6) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 6.101045ms) Jun 22 22:10:18.242: INFO: (6) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 6.037383ms) Jun 22 22:10:18.243: INFO: (6) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 6.27006ms) Jun 22 22:10:18.243: INFO: (6) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 6.42536ms) Jun 22 22:10:18.243: INFO: (6) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 6.423015ms) Jun 22 22:10:18.243: INFO: (6) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 7.096835ms) Jun 22 22:10:18.243: INFO: (6) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 7.117355ms) Jun 22 22:10:18.248: INFO: (7) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.071793ms) Jun 22 22:10:18.248: INFO: (7) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 4.169718ms) Jun 22 22:10:18.248: INFO: (7) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 7.220256ms) Jun 22 22:10:18.254: INFO: (8) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.26589ms) Jun 22 22:10:18.254: INFO: (8) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 3.490009ms) Jun 22 22:10:18.254: INFO: (8) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 3.501015ms) Jun 22 22:10:18.255: INFO: (8) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 3.723999ms) Jun 22 22:10:18.255: INFO: (8) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 3.759051ms) Jun 22 22:10:18.255: INFO: (8) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtesttest (200; 4.139032ms) Jun 22 22:10:18.260: INFO: (9) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... 
(200; 4.182241ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 5.323203ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 5.425988ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 5.449799ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 5.538781ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 5.539001ms) Jun 22 22:10:18.262: INFO: (9) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 5.594124ms) Jun 22 22:10:18.265: INFO: (10) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 2.534926ms) Jun 22 22:10:18.265: INFO: (10) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 2.779715ms) Jun 22 22:10:18.265: INFO: (10) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: testtesttest (200; 4.230375ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 4.204938ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... (200; 4.25366ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.29616ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 4.250078ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 4.298179ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 4.380682ms) Jun 22 22:10:18.272: INFO: (11) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 4.464117ms) Jun 22 22:10:18.276: INFO: (12) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 3.226711ms) Jun 22 22:10:18.276: INFO: (12) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.36847ms) Jun 22 22:10:18.276: INFO: (12) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 3.783795ms) Jun 22 22:10:18.276: INFO: (12) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.955088ms) Jun 22 22:10:18.277: INFO: (12) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: testtest (200; 4.981308ms) Jun 22 22:10:18.277: INFO: (12) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 5.07941ms) Jun 22 22:10:18.277: INFO: (12) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... 
(200; 5.030004ms) Jun 22 22:10:18.277: INFO: (12) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.96505ms) Jun 22 22:10:18.277: INFO: (12) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.955155ms) Jun 22 22:10:18.278: INFO: (12) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 5.71849ms) Jun 22 22:10:18.283: INFO: (13) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testt... (200; 8.752539ms) Jun 22 22:10:18.287: INFO: (13) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 8.996259ms) Jun 22 22:10:18.287: INFO: (13) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 9.031666ms) Jun 22 22:10:18.288: INFO: (13) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 9.330836ms) Jun 22 22:10:18.288: INFO: (13) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 9.651722ms) Jun 22 22:10:18.288: INFO: (13) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: test (200; 9.72978ms) Jun 22 22:10:18.288: INFO: (13) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 9.752421ms) Jun 22 22:10:18.288: INFO: (13) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 9.798489ms) Jun 22 22:10:18.292: INFO: (14) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 3.389993ms) Jun 22 22:10:18.292: INFO: (14) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 3.617729ms) Jun 22 22:10:18.292: INFO: (14) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.294177ms) Jun 22 22:10:18.292: INFO: (14) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:1080/proxy/: t... (200; 3.367924ms) Jun 22 22:10:18.292: INFO: (14) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 3.839897ms) Jun 22 22:10:18.293: INFO: (14) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 4.332738ms) Jun 22 22:10:18.293: INFO: (14) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 3.684902ms) Jun 22 22:10:18.293: INFO: (14) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.295933ms) Jun 22 22:10:18.293: INFO: (14) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testt... 
(200; 3.164154ms) Jun 22 22:10:18.297: INFO: (15) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 3.165661ms) Jun 22 22:10:18.297: INFO: (15) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.318477ms) Jun 22 22:10:18.297: INFO: (15) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.39559ms) Jun 22 22:10:18.297: INFO: (15) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 3.399023ms) Jun 22 22:10:18.298: INFO: (15) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 3.873454ms) Jun 22 22:10:18.298: INFO: (15) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 4.302369ms) Jun 22 22:10:18.298: INFO: (15) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 4.268819ms) Jun 22 22:10:18.298: INFO: (15) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.468873ms) Jun 22 22:10:18.298: INFO: (15) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.462643ms) Jun 22 22:10:18.301: INFO: (16) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 2.684999ms) Jun 22 22:10:18.301: INFO: (16) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q/proxy/: test (200; 2.843567ms) Jun 22 22:10:18.301: INFO: (16) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 2.900587ms) Jun 22 22:10:18.302: INFO: (16) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testt... (200; 4.3171ms) Jun 22 22:10:18.303: INFO: (16) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 4.268697ms) Jun 22 22:10:18.303: INFO: (16) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 4.329972ms) Jun 22 22:10:18.303: INFO: (16) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... (200; 10.822514ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 10.98327ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 11.049045ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 11.004296ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 11.22629ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 11.228425ms) Jun 22 22:10:18.315: INFO: (17) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 11.307723ms) Jun 22 22:10:18.317: INFO: (17) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtest (200; 13.865901ms) Jun 22 22:10:18.318: INFO: (17) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: t... 
(200; 13.160092ms) Jun 22 22:10:18.344: INFO: (18) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: test (200; 13.247583ms) Jun 22 22:10:18.344: INFO: (18) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 13.15996ms) Jun 22 22:10:18.344: INFO: (18) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 13.167417ms) Jun 22 22:10:18.344: INFO: (18) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 13.189393ms) Jun 22 22:10:18.344: INFO: (18) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname1/proxy/: foo (200; 13.227154ms) Jun 22 22:10:18.345: INFO: (18) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 13.603012ms) Jun 22 22:10:18.345: INFO: (18) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:1080/proxy/: testtestt... (200; 3.757456ms) Jun 22 22:10:18.352: INFO: (19) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 3.94313ms) Jun 22 22:10:18.352: INFO: (19) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/: foo (200; 4.045303ms) Jun 22 22:10:18.352: INFO: (19) /api/v1/namespaces/proxy-421/services/proxy-service-vpdrl:portname2/proxy/: bar (200; 4.010064ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:462/proxy/: tls qux (200; 4.203692ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:460/proxy/: tls baz (200; 4.220664ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname2/proxy/: bar (200; 4.207451ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/: foo (200; 4.20707ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/pods/https:proxy-service-vpdrl-tcv5q:443/proxy/: test (200; 4.395432ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/pods/http:proxy-service-vpdrl-tcv5q:162/proxy/: bar (200; 4.463272ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname2/proxy/: tls qux (200; 4.412332ms) Jun 22 22:10:18.353: INFO: (19) /api/v1/namespaces/proxy-421/services/https:proxy-service-vpdrl:tlsportname1/proxy/: tls baz (200; 5.1569ms) STEP: deleting ReplicationController proxy-service-vpdrl in namespace proxy-421, will wait for the garbage collector to delete the pods Jun 22 22:10:18.411: INFO: Deleting ReplicationController proxy-service-vpdrl took: 6.25247ms Jun 22 22:10:18.512: INFO: Terminating ReplicationController proxy-service-vpdrl pods took: 100.214502ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:10:29.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-421" for this suite. 
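All 320 attempts above go through the apiserver's proxy subresource, whose path embeds an optional scheme and a port or named port as <scheme>:<name>:<port>. The same endpoints can be hit by hand through kubectl proxy while the namespace exists; a minimal sketch reusing this run's generated names (they change every run, and the pod and service were already cleaned up above):

kubectl proxy --port=8001 &
# service proxy over plain HTTP via the named port "portname1" (returned "foo" above)
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-421/services/http:proxy-service-vpdrl:portname1/proxy/
# pod proxy on numeric port 160 (also the "foo" endpoint)
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-421/pods/proxy-service-vpdrl-tcv5q:160/proxy/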
• [SLOW TEST:17.887 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":249,"skipped":4074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:10:29.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-904dd485-e80e-4587-aead-ec4b3e0c674e STEP: Creating a pod to test consume secrets Jun 22 22:10:29.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56" in namespace "projected-4538" to be "success or failure" Jun 22 22:10:29.736: INFO: Pod "pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.851344ms Jun 22 22:10:31.831: INFO: Pod "pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097850977s Jun 22 22:10:33.927: INFO: Pod "pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.193614585s STEP: Saw pod success Jun 22 22:10:33.927: INFO: Pod "pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56" satisfied condition "success or failure" Jun 22 22:10:33.930: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56 container projected-secret-volume-test: STEP: delete the pod Jun 22 22:10:33.960: INFO: Waiting for pod pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56 to disappear Jun 22 22:10:33.982: INFO: Pod pod-projected-secrets-8000c6bf-41b7-410c-96d1-6bdc6bb27f56 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:10:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4538" for this suite. 
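The pod this spec creates mounts the secret through a projected volume with a key-to-path mapping and an explicit per-item mode, then exits 0 once the content checks out, which is the "success or failure" condition seen above. A minimal sketch: the secret name is the one created above, while the key, the remapped path, the mode, and the busybox reader are illustrative stand-ins for the harness's defaults:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # the harness generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-904dd485-e80e-4587-aead-ec4b3e0c674e
          items:
          - key: data-1             # illustrative key name
            path: new-path-data-1   # remapped filename inside the volume
            mode: 0400              # the per-item mode under test
EOF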
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4114,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:10:33.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-2c3b3f84-3386-43d5-a6f9-b9f4adb410e8 in namespace container-probe-4452 Jun 22 22:10:38.112: INFO: Started pod busybox-2c3b3f84-3386-43d5-a6f9-b9f4adb410e8 in namespace container-probe-4452 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 22:10:38.115: INFO: Initial restart count of pod busybox-2c3b3f84-3386-43d5-a6f9-b9f4adb410e8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:14:38.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4452" for this suite. 
• [SLOW TEST:244.908 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4124,"failed":0} [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:14:38.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:14:38.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 22 22:14:39.136: INFO: stderr: "" Jun 22 22:14:39.136: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:28:04Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:14:39.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8275" for this suite. 
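This spec only asserts that both the Client Version and Server Version structs reach stdout; the manual equivalent is simply:

kubectl --kubeconfig=/root/.kube/config version

The v1.17.4 client talking to the v1.17.2 server seen above is within kubectl's supported skew of one minor version in either direction.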
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":252,"skipped":4124,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:14:39.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ac17848c-8a7f-4d80-9b64-379a9d631b15 STEP: Creating a pod to test consume secrets Jun 22 22:14:39.240: INFO: Waiting up to 5m0s for pod "pod-secrets-817c9359-d97d-4289-839b-5566fce71431" in namespace "secrets-9539" to be "success or failure" Jun 22 22:14:39.244: INFO: Pod "pod-secrets-817c9359-d97d-4289-839b-5566fce71431": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34132ms Jun 22 22:14:41.248: INFO: Pod "pod-secrets-817c9359-d97d-4289-839b-5566fce71431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008105879s Jun 22 22:14:43.252: INFO: Pod "pod-secrets-817c9359-d97d-4289-839b-5566fce71431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01198839s STEP: Saw pod success Jun 22 22:14:43.252: INFO: Pod "pod-secrets-817c9359-d97d-4289-839b-5566fce71431" satisfied condition "success or failure" Jun 22 22:14:43.255: INFO: Trying to get logs from node jerma-worker pod pod-secrets-817c9359-d97d-4289-839b-5566fce71431 container secret-volume-test: STEP: delete the pod Jun 22 22:14:43.300: INFO: Waiting for pod pod-secrets-817c9359-d97d-4289-839b-5566fce71431 to disappear Jun 22 22:14:43.422: INFO: Pod pod-secrets-817c9359-d97d-4289-839b-5566fce71431 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:14:43.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9539" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4127,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:14:43.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:14:47.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4989" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":254,"skipped":4149,"failed":0} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:14:47.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:14:51.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6128" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:14:51.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0622 22:15:03.466555 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 22:15:03.466: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:15:03.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7946" for this suite. 
• [SLOW TEST:11.618 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":256,"skipped":4184,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:15:03.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:15:04.800: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:15:06.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460904, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460904, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460905, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728460904, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:15:09.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:15:09.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4995-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:15:11.212: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2202" for this suite. STEP: Destroying namespace "webhook-2202-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.319 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":257,"skipped":4204,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:15:11.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 in namespace container-probe-9578 Jun 22 22:15:16.238: INFO: Started pod liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 in namespace container-probe-9578 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 22:15:16.241: INFO: Initial restart count of pod liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is 0 Jun 22 22:15:34.908: INFO: Restart count of pod container-probe-9578/liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is now 1 (18.66661597s elapsed) Jun 22 22:15:52.964: INFO: Restart count of pod container-probe-9578/liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is now 2 (36.722492661s elapsed) Jun 22 22:16:11.019: INFO: Restart count of pod container-probe-9578/liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is now 3 (54.778259514s elapsed) Jun 22 22:16:31.066: INFO: Restart count of pod container-probe-9578/liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is now 4 (1m14.824689266s elapsed) Jun 22 22:17:41.233: INFO: Restart count of pod container-probe-9578/liveness-bb362b64-ae2b-4777-adfb-147fe6d1b342 is now 5 (2m24.992197136s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:17:41.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9578" for this suite. 
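Aside: the monotonic restartCount being polled above can be watched on any deliberately flapping pod. A sketch (image, timings and names are illustrative; the suite's probe pod is similar in spirit only):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Healthy for 20s, then the probe target vanishes and the kubelet restarts us.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 20; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount only ever grows; poll it the way the test does:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

Note how the intervals in the log stretch out (roughly 18s between the first restarts, then 70s before the fifth): that is the kubelet's crash-loop back-off, and the count still only moves upward.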
• [SLOW TEST:149.504 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4204,"failed":0} [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:17:41.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:17:41.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-20" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":259,"skipped":4204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:17:41.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 22 22:17:42.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 22 22:17:44.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461062, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461062, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461062, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461062, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:17:47.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:17:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-606" for this suite. STEP: Destroying namespace "webhook-606-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.463 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":260,"skipped":4320,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:17:48.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:17:48.178: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 22 22:17:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 create -f -' Jun 22 22:17:56.298: INFO: stderr: "" Jun 22 22:17:56.298: INFO: stdout: 
"e2e-test-crd-publish-openapi-8191-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 22 22:17:56.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 delete e2e-test-crd-publish-openapi-8191-crds test-foo' Jun 22 22:17:56.414: INFO: stderr: "" Jun 22 22:17:56.414: INFO: stdout: "e2e-test-crd-publish-openapi-8191-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 22 22:17:56.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 apply -f -' Jun 22 22:17:59.390: INFO: stderr: "" Jun 22 22:17:59.390: INFO: stdout: "e2e-test-crd-publish-openapi-8191-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 22 22:17:59.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 delete e2e-test-crd-publish-openapi-8191-crds test-foo' Jun 22 22:17:59.522: INFO: stderr: "" Jun 22 22:17:59.522: INFO: stdout: "e2e-test-crd-publish-openapi-8191-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 22 22:17:59.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 create -f -' Jun 22 22:18:02.646: INFO: rc: 1 Jun 22 22:18:02.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 apply -f -' Jun 22 22:18:05.567: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 22 22:18:05.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 create -f -' Jun 22 22:18:08.569: INFO: rc: 1 Jun 22 22:18:08.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8230 apply -f -' Jun 22 22:18:11.527: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 22 22:18:11.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8191-crds' Jun 22 22:18:14.951: INFO: stderr: "" Jun 22 22:18:14.951: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8191-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 22 22:18:14.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8191-crds.metadata' Jun 22 22:18:17.554: INFO: stderr: "" Jun 22 22:18:17.554: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8191-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 22 22:18:17.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8191-crds.spec' Jun 22 22:18:20.135: INFO: stderr: "" Jun 22 22:18:20.135: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8191-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 22 22:18:20.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8191-crds.spec.bars' Jun 22 22:18:24.016: INFO: stderr: "" Jun 22 22:18:24.016: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8191-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 22 22:18:24.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8191-crds.spec.bars2' Jun 22 22:18:24.383: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:18:26.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8230" for this suite. 
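Aside: with the CRD published by this test still installed, the rc: 1 rejections above are straightforward to reproduce, because the explain output documents that each entry under spec.bars must set name. A sketch (kind and group are taken from the explain output above; the object itself is illustrative):

# Drill into the published schema, as the test does:
kubectl explain e2e-test-crd-publish-openapi-8191-crds.spec.bars

# Rejected by client-side validation: the required "name" is missing on the bar.
cat <<'EOF' | kubectl create -f -
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-8191-crd
metadata:
  name: test-foo
spec:
  bars:
  - age: 10
EOF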
• [SLOW TEST:38.158 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":261,"skipped":4322,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:18:26.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jun 22 22:18:26.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3437' Jun 22 22:18:26.688: INFO: stderr: "" Jun 22 22:18:26.688: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 22 22:18:27.692: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:27.693: INFO: Found 0 / 1 Jun 22 22:18:28.742: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:28.743: INFO: Found 0 / 1 Jun 22 22:18:29.693: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:29.693: INFO: Found 0 / 1 Jun 22 22:18:30.692: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:30.692: INFO: Found 1 / 1 Jun 22 22:18:30.692: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 22 22:18:30.696: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:30.696: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 22 22:18:30.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-gxhpj --namespace=kubectl-3437 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 22 22:18:30.800: INFO: stderr: "" Jun 22 22:18:30.800: INFO: stdout: "pod/agnhost-master-gxhpj patched\n" STEP: checking annotations Jun 22 22:18:30.803: INFO: Selector matched 1 pods for map[app:agnhost] Jun 22 22:18:30.803: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:18:30.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3437" for this suite. 
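Aside: the patch step above is the entire mechanism; annotations are a map that merges on patch, so repeated patches accumulate keys. Reusing the names from this run (illustrative anywhere else):

kubectl patch pod agnhost-master-gxhpj --namespace=kubectl-3437 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation landed:
kubectl get pod agnhost-master-gxhpj --namespace=kubectl-3437 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y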
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":262,"skipped":4324,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:18:30.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 22:18:30.942: INFO: Waiting up to 5m0s for pod "pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8" in namespace "emptydir-9451" to be "success or failure" Jun 22 22:18:30.980: INFO: Pod "pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 37.783147ms Jun 22 22:18:32.983: INFO: Pod "pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04147385s Jun 22 22:18:34.988: INFO: Pod "pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046276804s STEP: Saw pod success Jun 22 22:18:34.988: INFO: Pod "pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8" satisfied condition "success or failure" Jun 22 22:18:34.991: INFO: Trying to get logs from node jerma-worker2 pod pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8 container test-container: STEP: delete the pod Jun 22 22:18:35.024: INFO: Waiting for pod pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8 to disappear Jun 22 22:18:35.083: INFO: Pod pod-03c321cc-6d24-4ed5-9786-0ff99db87fc8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:18:35.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9451" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:18:35.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jun 22 22:18:35.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9564' Jun 22 22:18:35.392: INFO: stderr: "" Jun 22 22:18:35.392: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 22:18:35.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9564' Jun 22 22:18:35.506: INFO: stderr: "" Jun 22 22:18:35.506: INFO: stdout: "update-demo-nautilus-kqtfs update-demo-nautilus-m5kwb " Jun 22 22:18:35.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kqtfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:18:35.593: INFO: stderr: "" Jun 22 22:18:35.593: INFO: stdout: "" Jun 22 22:18:35.593: INFO: update-demo-nautilus-kqtfs is created but not running Jun 22 22:18:40.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9564' Jun 22 22:18:40.688: INFO: stderr: "" Jun 22 22:18:40.688: INFO: stdout: "update-demo-nautilus-kqtfs update-demo-nautilus-m5kwb " Jun 22 22:18:40.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kqtfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:18:40.773: INFO: stderr: "" Jun 22 22:18:40.773: INFO: stdout: "true" Jun 22 22:18:40.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kqtfs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:18:40.864: INFO: stderr: "" Jun 22 22:18:40.864: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:18:40.864: INFO: validating pod update-demo-nautilus-kqtfs Jun 22 22:18:40.907: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:18:40.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:18:40.907: INFO: update-demo-nautilus-kqtfs is verified up and running Jun 22 22:18:40.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5kwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:18:41.001: INFO: stderr: "" Jun 22 22:18:41.001: INFO: stdout: "true" Jun 22 22:18:41.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5kwb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:18:41.099: INFO: stderr: "" Jun 22 22:18:41.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 22:18:41.099: INFO: validating pod update-demo-nautilus-m5kwb Jun 22 22:18:41.104: INFO: got data: { "image": "nautilus.jpg" } Jun 22 22:18:41.104: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 22:18:41.104: INFO: update-demo-nautilus-m5kwb is verified up and running STEP: rolling-update to new replication controller Jun 22 22:18:41.132: INFO: scanned /root for discovery docs: Jun 22 22:18:41.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9564' Jun 22 22:19:03.755: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 22:19:03.755: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 22:19:03.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9564' Jun 22 22:19:03.859: INFO: stderr: "" Jun 22 22:19:03.859: INFO: stdout: "update-demo-kitten-hjfk6 update-demo-kitten-ljmcm " Jun 22 22:19:03.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hjfk6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:19:03.944: INFO: stderr: "" Jun 22 22:19:03.944: INFO: stdout: "true" Jun 22 22:19:03.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hjfk6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:19:04.041: INFO: stderr: "" Jun 22 22:19:04.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 22:19:04.041: INFO: validating pod update-demo-kitten-hjfk6 Jun 22 22:19:04.052: INFO: got data: { "image": "kitten.jpg" } Jun 22 22:19:04.052: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 22:19:04.052: INFO: update-demo-kitten-hjfk6 is verified up and running Jun 22 22:19:04.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ljmcm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:19:04.150: INFO: stderr: "" Jun 22 22:19:04.150: INFO: stdout: "true" Jun 22 22:19:04.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ljmcm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9564' Jun 22 22:19:04.248: INFO: stderr: "" Jun 22 22:19:04.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 22:19:04.248: INFO: validating pod update-demo-kitten-ljmcm Jun 22 22:19:04.268: INFO: got data: { "image": "kitten.jpg" } Jun 22 22:19:04.268: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 22:19:04.268: INFO: update-demo-kitten-ljmcm is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:04.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9564" for this suite. 
• [SLOW TEST:29.181 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":264,"skipped":4348,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:04.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:19:08.482: INFO: Waiting up to 5m0s for pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f" in namespace "pods-5735" to be "success or failure" Jun 22 22:19:08.494: INFO: Pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.843079ms Jun 22 22:19:10.611: INFO: Pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12891534s Jun 22 22:19:12.615: INFO: Pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.133237966s Jun 22 22:19:14.619: INFO: Pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13740331s STEP: Saw pod success Jun 22 22:19:14.620: INFO: Pod "client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f" satisfied condition "success or failure" Jun 22 22:19:14.623: INFO: Trying to get logs from node jerma-worker pod client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f container env3cont: STEP: delete the pod Jun 22 22:19:14.662: INFO: Waiting for pod client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f to disappear Jun 22 22:19:14.666: INFO: Pod client-envvars-7a0d250e-292c-4345-a63f-18eb710a0c7f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:14.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5735" for this suite. 
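Aside: what this test asserts is the kubelet's docker-link-style service environment injection: containers see variables derived mechanically from each service's name. For a service named foo on port 8765, roughly the following (values are illustrative and the exec'd pod name hypothetical):

kubectl exec client-envvars-demo -- sh -c 'env | grep ^FOO_'
# FOO_SERVICE_HOST=10.96.45.18
# FOO_SERVICE_PORT=8765
# FOO_PORT_8765_TCP=tcp://10.96.45.18:8765

Ordering matters: containers started before the service was created do not get the variables.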
• [SLOW TEST:10.398 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:14.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 22 22:19:14.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1324' Jun 22 22:19:14.912: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 22:19:14.912: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Jun 22 22:19:16.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1324' Jun 22 22:19:17.038: INFO: stderr: "" Jun 22 22:19:17.038: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1324" for this suite. 
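Aside: the deprecation warning above names its own replacements; either of these produces the httpd workload without the deployment/apps.v1 generator:

# A bare pod (the generator later kubectl releases default to):
kubectl run e2e-test-httpd --generator=run-pod/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine

# Or an explicit Deployment:
kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine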
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":266,"skipped":4387,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:17.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 22 22:19:17.295: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:31.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4315" for this suite. • [SLOW TEST:14.268 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":267,"skipped":4404,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:31.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:19:31.556: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8d831d54-5747-46b1-b9ed-7a9c4ec0cc6e", Controller:(*bool)(0xc003635722), BlockOwnerDeletion:(*bool)(0xc003635723)}} Jun 22 22:19:31.618: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"09b1a3ce-a421-4888-8542-e6ab8c8cec34", 
Controller:(*bool)(0xc003604542), BlockOwnerDeletion:(*bool)(0xc003604543)}} Jun 22 22:19:31.627: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ab609cb8-5f48-4d1a-b3fc-f6c4244b904b", Controller:(*bool)(0xc00365f11a), BlockOwnerDeletion:(*bool)(0xc00365f11b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:36.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5567" for this suite. • [SLOW TEST:5.279 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":268,"skipped":4417,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:36.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:36.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1869" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":269,"skipped":4433,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:36.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4063 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4063 to expose endpoints map[] Jun 22 22:19:36.968: INFO: Get endpoints failed (6.54698ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 22 22:19:37.972: INFO: successfully validated that service endpoint-test2 in namespace services-4063 exposes endpoints map[] (1.01067687s elapsed) STEP: Creating pod pod1 in namespace services-4063 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4063 to expose endpoints map[pod1:[80]] Jun 22 22:19:41.034: INFO: successfully validated that service endpoint-test2 in namespace services-4063 exposes endpoints map[pod1:[80]] (3.05502286s elapsed) STEP: Creating pod pod2 in namespace services-4063 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4063 to expose endpoints map[pod1:[80] pod2:[80]] Jun 22 22:19:45.175: INFO: successfully validated that service endpoint-test2 in namespace services-4063 exposes endpoints map[pod1:[80] pod2:[80]] (4.136710286s elapsed) STEP: Deleting pod pod1 in namespace services-4063 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4063 to expose endpoints map[pod2:[80]] Jun 22 22:19:46.236: INFO: successfully validated that service endpoint-test2 in namespace services-4063 exposes endpoints map[pod2:[80]] (1.055357454s elapsed) STEP: Deleting pod pod2 in namespace services-4063 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4063 to expose endpoints map[] Jun 22 22:19:47.294: INFO: successfully validated that service endpoint-test2 in namespace services-4063 exposes endpoints map[] (1.053275282s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:47.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4063" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.482 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":270,"skipped":4440,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:47.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:19:47.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab" in namespace "projected-9295" to be "success or failure" Jun 22 22:19:47.450: INFO: Pod "downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab": Phase="Pending", Reason="", readiness=false. Elapsed: 67.86063ms Jun 22 22:19:49.558: INFO: Pod "downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176266354s Jun 22 22:19:51.561: INFO: Pod "downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17957013s STEP: Saw pod success Jun 22 22:19:51.561: INFO: Pod "downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab" satisfied condition "success or failure" Jun 22 22:19:51.564: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab container client-container: STEP: delete the pod Jun 22 22:19:51.615: INFO: Waiting for pod downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab to disappear Jun 22 22:19:51.630: INFO: Pod downwardapi-volume-c9c510d8-05ac-4e90-9038-c4a9cee347ab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:19:51.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9295" for this suite. 
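Aside: the projected downwardAPI volume under test maps a container resource request onto a file inside the pod. A minimal sketch (names illustrative; note resourceFieldRef's divisor defaults to 1, so a 250m request is rounded up and the file reads 1):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
kubectl logs downwardapi-cpu-demo       # prints: 1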
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4448,"failed":0} ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:19:51.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jun 22 22:19:55.737: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 22 22:20:00.832: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:00.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3502" for this suite. 
• [SLOW TEST:9.206 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":272,"skipped":4448,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:00.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jun 22 22:20:00.904: INFO: Waiting up to 5m0s for pod "pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0" in namespace "emptydir-8238" to be "success or failure" Jun 22 22:20:00.947: INFO: Pod "pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.389091ms Jun 22 22:20:02.995: INFO: Pod "pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090972923s Jun 22 22:20:05.000: INFO: Pod "pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095582087s STEP: Saw pod success Jun 22 22:20:05.000: INFO: Pod "pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0" satisfied condition "success or failure" Jun 22 22:20:05.004: INFO: Trying to get logs from node jerma-worker2 pod pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0 container test-container: STEP: delete the pod Jun 22 22:20:05.036: INFO: Waiting for pod pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0 to disappear Jun 22 22:20:05.070: INFO: Pod pod-48334b04-c7f4-4e30-a4fe-6c1ca5cdc6e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:05.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8238" for this suite. 
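The assertion here is that an emptyDir on the default (node-disk) medium comes up world-writable. A minimal sketch; the pod name, busybox image, and mount path are illustrative:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # default medium: backed by node disk
EOF
kubectl logs emptydir-mode-demo   # expect drwxrwxrwx, i.e. mode 0777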
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4461,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:05.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-9cwh STEP: Creating a pod to test atomic-volume-subpath Jun 22 22:20:05.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9cwh" in namespace "subpath-3824" to be "success or failure" Jun 22 22:20:05.236: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.940973ms Jun 22 22:20:07.240: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008362606s Jun 22 22:20:09.243: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 4.011231822s Jun 22 22:20:11.247: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 6.015881566s Jun 22 22:20:13.252: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 8.020303203s Jun 22 22:20:15.256: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 10.024174533s Jun 22 22:20:17.260: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 12.028180081s Jun 22 22:20:19.264: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 14.032304537s Jun 22 22:20:21.267: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 16.035823295s Jun 22 22:20:23.272: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 18.039965471s Jun 22 22:20:25.276: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 20.044117332s Jun 22 22:20:27.280: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Running", Reason="", readiness=true. Elapsed: 22.048411025s Jun 22 22:20:29.284: INFO: Pod "pod-subpath-test-projected-9cwh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052737774s STEP: Saw pod success Jun 22 22:20:29.284: INFO: Pod "pod-subpath-test-projected-9cwh" satisfied condition "success or failure" Jun 22 22:20:29.287: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-9cwh container test-container-subpath-projected-9cwh: STEP: delete the pod Jun 22 22:20:29.327: INFO: Waiting for pod pod-subpath-test-projected-9cwh to disappear Jun 22 22:20:29.360: INFO: Pod pod-subpath-test-projected-9cwh no longer exists STEP: Deleting pod pod-subpath-test-projected-9cwh Jun 22 22:20:29.360: INFO: Deleting pod "pod-subpath-test-projected-9cwh" in namespace "subpath-3824" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:29.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3824" for this suite. • [SLOW TEST:24.318 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":274,"skipped":4467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:29.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 22 22:20:29.800: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 22 22:20:31.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461229, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461229, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 22 22:20:34.840: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 22 22:20:34.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:36.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5002" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.894 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":275,"skipped":4500,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:36.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-31a6e32b-d180-4c05-b200-d78e6d361e78 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:36.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7323" for this suite. 
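This negative case is easy to reproduce: a Secret whose data map contains an empty key never reaches storage, because API validation rejects it. The manifest below is illustrative (the test itself builds the object through the Go client rather than kubectl):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=          # empty key; "dmFsdWU=" is base64 for "value"
EOF
# Expected: a validation error along the lines of
#   Invalid value: "": a valid config key must consist of alphanumeric
#   characters, '-', '_' or '.'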
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":276,"skipped":4518,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:36.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 22 22:20:36.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3" in namespace "downward-api-5624" to be "success or failure" Jun 22 22:20:36.420: INFO: Pod "downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.336038ms Jun 22 22:20:38.424: INFO: Pod "downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016665581s Jun 22 22:20:40.429: INFO: Pod "downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02144565s STEP: Saw pod success Jun 22 22:20:40.429: INFO: Pod "downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3" satisfied condition "success or failure" Jun 22 22:20:40.431: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3 container client-container: STEP: delete the pod Jun 22 22:20:40.465: INFO: Waiting for pod downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3 to disappear Jun 22 22:20:40.478: INFO: Pod downwardapi-volume-e156eaea-ab60-4caf-b75e-2ce7cd2d68c3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:40.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5624" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 22 22:20:40.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 22 22:20:40.554: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jun 22 22:20:40.916: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 22 22:20:43.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:20:45.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728461240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 22:20:47.535: INFO: Waited 519.981396ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 22 22:20:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4519" for this suite. • [SLOW TEST:7.858 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":278,"skipped":4560,"failed":0} SSSS
Jun 22 22:20:48.345: INFO: Running AfterSuite actions on all nodes Jun 22 22:20:48.345: INFO: Running AfterSuite actions on node 1 Jun 22 22:20:48.345: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4306.588 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS