I0816 23:20:32.561654 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0816 23:20:32.561846 7 e2e.go:129] Starting e2e run "465b7e17-0d61-4e7d-ade5-cffdb9c07cf9" on Ginkgo node 1
{"msg":"Test Suite starting","total":294,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597620031 - Will randomize all specs
Will run 294 of 5214 specs

Aug 16 23:20:32.616: INFO: >>> kubeConfig: /root/.kube/config
Aug 16 23:20:32.621: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 16 23:20:32.642: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 16 23:20:32.676: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 16 23:20:32.676: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 16 23:20:32.676: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 16 23:20:32.686: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 16 23:20:32.686: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 16 23:20:32.686: INFO: e2e test version: v1.20.0-alpha.0
Aug 16 23:20:32.687: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 16 23:20:32.687: INFO: >>> kubeConfig: /root/.kube/config
Aug 16 23:20:32.691: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:20:32.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Aug 16 23:20:32.773: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5518
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-5518
Aug 16 23:20:32.810: INFO: Found 0 stateful pods, waiting for 1
Aug 16 23:20:42.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Aug 16 23:20:42.861: INFO: Deleting all statefulset in ns statefulset-5518
Aug 16 23:20:42.878: INFO: Scaling statefulset ss to 0
Aug 16 23:21:03.346: INFO: Waiting for statefulset status.replicas updated to 0
Aug 16 23:21:03.753: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:21:04.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5518" for this suite.
• [SLOW TEST:31.438 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":294,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:21:04.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 16 23:21:26.553: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:26.553: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:26.599743 7 log.go:181] (0xc00398c2c0) (0xc00257eb40) Create stream
I0816 23:21:26.599774 7 log.go:181] (0xc00398c2c0) (0xc00257eb40) Stream added, broadcasting: 1
I0816 23:21:26.602803 7 log.go:181] (0xc00398c2c0) Reply frame received for 1
I0816 23:21:26.602856 7 log.go:181] (0xc00398c2c0) (0xc00127f9a0) Create stream
I0816 23:21:26.602873 7 log.go:181] (0xc00398c2c0) (0xc00127f9a0) Stream added, broadcasting: 3
I0816 23:21:26.603900 7 log.go:181] (0xc00398c2c0) Reply frame received for 3
I0816 23:21:26.603945 7 log.go:181] (0xc00398c2c0) (0xc000aea6e0) Create stream
I0816 23:21:26.603961 7 log.go:181] (0xc00398c2c0) (0xc000aea6e0) Stream added, broadcasting: 5
I0816 23:21:26.605188 7 log.go:181] (0xc00398c2c0) Reply frame received for 5
I0816 23:21:26.653982 7 log.go:181] (0xc00398c2c0) Data frame received for 5
I0816 23:21:26.654018 7 log.go:181] (0xc000aea6e0) (5) Data frame handling
I0816 23:21:26.654041 7 log.go:181] (0xc00398c2c0) Data frame received for 3
I0816 23:21:26.654053 7 log.go:181] (0xc00127f9a0) (3) Data frame handling
I0816 23:21:26.654067 7 log.go:181] (0xc00127f9a0) (3) Data frame sent
I0816 23:21:26.654079 7 log.go:181] (0xc00398c2c0) Data frame received for 3
I0816 23:21:26.654090 7 log.go:181] (0xc00127f9a0) (3) Data frame handling
I0816 23:21:26.655567 7 log.go:181] (0xc00398c2c0) Data frame received for 1
I0816 23:21:26.655610 7 log.go:181] (0xc00257eb40) (1) Data frame handling
I0816 23:21:26.655625 7 log.go:181] (0xc00257eb40) (1) Data frame sent
I0816 23:21:26.655645 7 log.go:181] (0xc00398c2c0) (0xc00257eb40) Stream removed, broadcasting: 1
I0816 23:21:26.655659 7 log.go:181] (0xc00398c2c0) Go away received
I0816 23:21:26.656236 7 log.go:181] (0xc00398c2c0) (0xc00257eb40) Stream removed, broadcasting: 1
I0816 23:21:26.656256 7 log.go:181] (0xc00398c2c0) (0xc00127f9a0) Stream removed, broadcasting: 3
I0816 23:21:26.656266 7 log.go:181] (0xc00398c2c0) (0xc000aea6e0) Stream removed, broadcasting: 5
Aug 16 23:21:26.656: INFO: Exec stderr: ""
Aug 16 23:21:26.656: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:26.656: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:26.753771 7 log.go:181] (0xc00398c630) (0xc00257edc0) Create stream
I0816 23:21:26.753797 7 log.go:181] (0xc00398c630) (0xc00257edc0) Stream added, broadcasting: 1
I0816 23:21:26.756621 7 log.go:181] (0xc00398c630) Reply frame received for 1
I0816 23:21:26.756671 7 log.go:181] (0xc00398c630) (0xc002816000) Create stream
I0816 23:21:26.756692 7 log.go:181] (0xc00398c630) (0xc002816000) Stream added, broadcasting: 3
I0816 23:21:26.757935 7 log.go:181] (0xc00398c630) Reply frame received for 3
I0816 23:21:26.757954 7 log.go:181] (0xc00398c630) (0xc000aea8c0) Create stream
I0816 23:21:26.757963 7 log.go:181] (0xc00398c630) (0xc000aea8c0) Stream added, broadcasting: 5
I0816 23:21:26.758946 7 log.go:181] (0xc00398c630) Reply frame received for 5
I0816 23:21:26.820618 7 log.go:181] (0xc00398c630) Data frame received for 5
I0816 23:21:26.820702 7 log.go:181] (0xc000aea8c0) (5) Data frame handling
I0816 23:21:26.820851 7 log.go:181] (0xc00398c630) Data frame received for 3
I0816 23:21:26.820876 7 log.go:181] (0xc002816000) (3) Data frame handling
I0816 23:21:26.820885 7 log.go:181] (0xc002816000) (3) Data frame sent
I0816 23:21:26.820891 7 log.go:181] (0xc00398c630) Data frame received for 3
I0816 23:21:26.820897 7 log.go:181] (0xc002816000) (3) Data frame handling
I0816 23:21:26.822281 7 log.go:181] (0xc00398c630) Data frame received for 1
I0816 23:21:26.822300 7 log.go:181] (0xc00257edc0) (1) Data frame handling
I0816 23:21:26.822325 7 log.go:181] (0xc00257edc0) (1) Data frame sent
I0816 23:21:26.822483 7 log.go:181] (0xc00398c630) (0xc00257edc0) Stream removed, broadcasting: 1
I0816 23:21:26.822581 7 log.go:181] (0xc00398c630) (0xc00257edc0) Stream removed, broadcasting: 1
I0816 23:21:26.822590 7 log.go:181] (0xc00398c630) (0xc002816000) Stream removed, broadcasting: 3
I0816 23:21:26.822599 7 log.go:181] (0xc00398c630) (0xc000aea8c0) Stream removed, broadcasting: 5
Aug 16 23:21:26.822: INFO: Exec stderr: ""
I0816 23:21:26.822621 7 log.go:181] (0xc00398c630) Go away received
Aug 16 23:21:26.822: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:26.822: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.206355 7 log.go:181] (0xc003bea370) (0xc002816500) Create stream
I0816 23:21:27.206386 7 log.go:181] (0xc003bea370) (0xc002816500) Stream added, broadcasting: 1
I0816 23:21:27.210770 7 log.go:181] (0xc003bea370) Reply frame received for 1
I0816 23:21:27.210818 7 log.go:181] (0xc003bea370) (0xc000aea960) Create stream
I0816 23:21:27.210835 7 log.go:181] (0xc003bea370) (0xc000aea960) Stream added, broadcasting: 3
I0816 23:21:27.211903 7 log.go:181] (0xc003bea370) Reply frame received for 3
I0816 23:21:27.211953 7 log.go:181] (0xc003bea370) (0xc001c7c000) Create stream
I0816 23:21:27.211981 7 log.go:181] (0xc003bea370) (0xc001c7c000) Stream added, broadcasting: 5
I0816 23:21:27.213370 7 log.go:181] (0xc003bea370) Reply frame received for 5
I0816 23:21:27.277600 7 log.go:181] (0xc003bea370) Data frame received for 3
I0816 23:21:27.277641 7 log.go:181] (0xc000aea960) (3) Data frame handling
I0816 23:21:27.277655 7 log.go:181] (0xc000aea960) (3) Data frame sent
I0816 23:21:27.277663 7 log.go:181] (0xc003bea370) Data frame received for 3
I0816 23:21:27.277669 7 log.go:181] (0xc000aea960) (3) Data frame handling
I0816 23:21:27.277708 7 log.go:181] (0xc003bea370) Data frame received for 5
I0816 23:21:27.277728 7 log.go:181] (0xc001c7c000) (5) Data frame handling
I0816 23:21:27.278939 7 log.go:181] (0xc003bea370) Data frame received for 1
I0816 23:21:27.278957 7 log.go:181] (0xc002816500) (1) Data frame handling
I0816 23:21:27.278970 7 log.go:181] (0xc002816500) (1) Data frame sent
I0816 23:21:27.278985 7 log.go:181] (0xc003bea370) (0xc002816500) Stream removed, broadcasting: 1
I0816 23:21:27.279031 7 log.go:181] (0xc003bea370) Go away received
I0816 23:21:27.279083 7 log.go:181] (0xc003bea370) (0xc002816500) Stream removed, broadcasting: 1
I0816 23:21:27.279130 7 log.go:181] (0xc003bea370) (0xc000aea960) Stream removed, broadcasting: 3
I0816 23:21:27.279146 7 log.go:181] (0xc003bea370) (0xc001c7c000) Stream removed, broadcasting: 5
Aug 16 23:21:27.279: INFO: Exec stderr: ""
Aug 16 23:21:27.279: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.279: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.464210 7 log.go:181] (0xc003383760) (0xc002130fa0) Create stream
I0816 23:21:27.464242 7 log.go:181] (0xc003383760) (0xc002130fa0) Stream added, broadcasting: 1
I0816 23:21:27.466422 7 log.go:181] (0xc003383760) Reply frame received for 1
I0816 23:21:27.466450 7 log.go:181] (0xc003383760) (0xc001f20140) Create stream
I0816 23:21:27.466456 7 log.go:181] (0xc003383760) (0xc001f20140) Stream added, broadcasting: 3
I0816 23:21:27.467294 7 log.go:181] (0xc003383760) Reply frame received for 3
I0816 23:21:27.467345 7 log.go:181] (0xc003383760) (0xc0021310e0) Create stream
I0816 23:21:27.467369 7 log.go:181] (0xc003383760) (0xc0021310e0) Stream added, broadcasting: 5
I0816 23:21:27.468659 7 log.go:181] (0xc003383760) Reply frame received for 5
I0816 23:21:27.535440 7 log.go:181] (0xc003383760) Data frame received for 5
I0816 23:21:27.535464 7 log.go:181] (0xc0021310e0) (5) Data frame handling
I0816 23:21:27.535495 7 log.go:181] (0xc003383760) Data frame received for 3
I0816 23:21:27.535521 7 log.go:181] (0xc001f20140) (3) Data frame handling
I0816 23:21:27.535535 7 log.go:181] (0xc001f20140) (3) Data frame sent
I0816 23:21:27.535540 7 log.go:181] (0xc003383760) Data frame received for 3
I0816 23:21:27.535551 7 log.go:181] (0xc001f20140) (3) Data frame handling
I0816 23:21:27.536595 7 log.go:181] (0xc003383760) Data frame received for 1
I0816 23:21:27.536621 7 log.go:181] (0xc002130fa0) (1) Data frame handling
I0816 23:21:27.536643 7 log.go:181] (0xc002130fa0) (1) Data frame sent
I0816 23:21:27.536660 7 log.go:181] (0xc003383760) (0xc002130fa0) Stream removed, broadcasting: 1
I0816 23:21:27.536856 7 log.go:181] (0xc003383760) Go away received
I0816 23:21:27.536912 7 log.go:181] (0xc003383760) (0xc002130fa0) Stream removed, broadcasting: 1
I0816 23:21:27.536932 7 log.go:181] (0xc003383760) (0xc001f20140) Stream removed, broadcasting: 3
I0816 23:21:27.536949 7 log.go:181] (0xc003383760) (0xc0021310e0) Stream removed, broadcasting: 5
Aug 16 23:21:27.536: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 16 23:21:27.536: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.537: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.561219 7 log.go:181] (0xc0029520b0) (0xc001b326e0) Create stream
I0816 23:21:27.561240 7 log.go:181] (0xc0029520b0) (0xc001b326e0) Stream added, broadcasting: 1
I0816 23:21:27.563340 7 log.go:181] (0xc0029520b0) Reply frame received for 1
I0816 23:21:27.563402 7 log.go:181] (0xc0029520b0) (0xc0009c6000) Create stream
I0816 23:21:27.563415 7 log.go:181] (0xc0029520b0) (0xc0009c6000) Stream added, broadcasting: 3
I0816 23:21:27.564266 7 log.go:181] (0xc0029520b0) Reply frame received for 3
I0816 23:21:27.564301 7 log.go:181] (0xc0029520b0) (0xc00101a0a0) Create stream
I0816 23:21:27.564313 7 log.go:181] (0xc0029520b0) (0xc00101a0a0) Stream added, broadcasting: 5
I0816 23:21:27.565240 7 log.go:181] (0xc0029520b0) Reply frame received for 5
I0816 23:21:27.632657 7 log.go:181] (0xc0029520b0) Data frame received for 5
I0816 23:21:27.632716 7 log.go:181] (0xc0029520b0) Data frame received for 3
I0816 23:21:27.632891 7 log.go:181] (0xc0009c6000) (3) Data frame handling
I0816 23:21:27.632918 7 log.go:181] (0xc0009c6000) (3) Data frame sent
I0816 23:21:27.632939 7 log.go:181] (0xc0029520b0) Data frame received for 3
I0816 23:21:27.632958 7 log.go:181] (0xc00101a0a0) (5) Data frame handling
I0816 23:21:27.633033 7 log.go:181] (0xc0009c6000) (3) Data frame handling
I0816 23:21:27.634277 7 log.go:181] (0xc0029520b0) Data frame received for 1
I0816 23:21:27.634297 7 log.go:181] (0xc001b326e0) (1) Data frame handling
I0816 23:21:27.634311 7 log.go:181] (0xc001b326e0) (1) Data frame sent
I0816 23:21:27.634332 7 log.go:181] (0xc0029520b0) (0xc001b326e0) Stream removed, broadcasting: 1
I0816 23:21:27.634421 7 log.go:181] (0xc0029520b0) Go away received
I0816 23:21:27.634475 7 log.go:181] (0xc0029520b0) (0xc001b326e0) Stream removed, broadcasting: 1
I0816 23:21:27.634496 7 log.go:181] (0xc0029520b0) (0xc0009c6000) Stream removed, broadcasting: 3
I0816 23:21:27.634509 7 log.go:181] (0xc0029520b0) (0xc00101a0a0) Stream removed, broadcasting: 5
Aug 16 23:21:27.634: INFO: Exec stderr: ""
Aug 16 23:21:27.634: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.634: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.683464 7 log.go:181] (0xc0029526e0) (0xc0009c66e0) Create stream
I0816 23:21:27.683504 7 log.go:181] (0xc0029526e0) (0xc0009c66e0) Stream added, broadcasting: 1
I0816 23:21:27.686608 7 log.go:181] (0xc0029526e0) Reply frame received for 1
I0816 23:21:27.686640 7 log.go:181] (0xc0029526e0) (0xc0009c68c0) Create stream
I0816 23:21:27.686652 7 log.go:181] (0xc0029526e0) (0xc0009c68c0) Stream added, broadcasting: 3
I0816 23:21:27.687541 7 log.go:181] (0xc0029526e0) Reply frame received for 3
I0816 23:21:27.687578 7 log.go:181] (0xc0029526e0) (0xc00101a1e0) Create stream
I0816 23:21:27.687593 7 log.go:181] (0xc0029526e0) (0xc00101a1e0) Stream added, broadcasting: 5
I0816 23:21:27.688646 7 log.go:181] (0xc0029526e0) Reply frame received for 5
I0816 23:21:27.746255 7 log.go:181] (0xc0029526e0) Data frame received for 5
I0816 23:21:27.746308 7 log.go:181] (0xc00101a1e0) (5) Data frame handling
I0816 23:21:27.746352 7 log.go:181] (0xc0029526e0) Data frame received for 3
I0816 23:21:27.746385 7 log.go:181] (0xc0009c68c0) (3) Data frame handling
I0816 23:21:27.746427 7 log.go:181] (0xc0009c68c0) (3) Data frame sent
I0816 23:21:27.746443 7 log.go:181] (0xc0029526e0) Data frame received for 3
I0816 23:21:27.746456 7 log.go:181] (0xc0009c68c0) (3) Data frame handling
I0816 23:21:27.748096 7 log.go:181] (0xc0029526e0) Data frame received for 1
I0816 23:21:27.748138 7 log.go:181] (0xc0009c66e0) (1) Data frame handling
I0816 23:21:27.748153 7 log.go:181] (0xc0009c66e0) (1) Data frame sent
I0816 23:21:27.748172 7 log.go:181] (0xc0029526e0) (0xc0009c66e0) Stream removed, broadcasting: 1
I0816 23:21:27.748219 7 log.go:181] (0xc0029526e0) Go away received
I0816 23:21:27.748258 7 log.go:181] (0xc0029526e0) (0xc0009c66e0) Stream removed, broadcasting: 1
I0816 23:21:27.748281 7 log.go:181] (0xc0029526e0) (0xc0009c68c0) Stream removed, broadcasting: 3
I0816 23:21:27.748299 7 log.go:181] (0xc0029526e0) (0xc00101a1e0) Stream removed, broadcasting: 5
Aug 16 23:21:27.748: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 16 23:21:27.748: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.748: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.778864 7 log.go:181] (0xc002fa8420) (0xc00101abe0) Create stream
I0816 23:21:27.778887 7 log.go:181] (0xc002fa8420) (0xc00101abe0) Stream added, broadcasting: 1
I0816 23:21:27.781277 7 log.go:181] (0xc002fa8420) Reply frame received for 1
I0816 23:21:27.781306 7 log.go:181] (0xc002fa8420) (0xc002131220) Create stream
I0816 23:21:27.781317 7 log.go:181] (0xc002fa8420) (0xc002131220) Stream added, broadcasting: 3
I0816 23:21:27.782162 7 log.go:181] (0xc002fa8420) Reply frame received for 3
I0816 23:21:27.782243 7 log.go:181] (0xc002fa8420) (0xc000fbdcc0) Create stream
I0816 23:21:27.782276 7 log.go:181] (0xc002fa8420) (0xc000fbdcc0) Stream added, broadcasting: 5
I0816 23:21:27.783557 7 log.go:181] (0xc002fa8420) Reply frame received for 5
I0816 23:21:27.858834 7 log.go:181] (0xc002fa8420) Data frame received for 5
I0816 23:21:27.858858 7 log.go:181] (0xc000fbdcc0) (5) Data frame handling
I0816 23:21:27.858922 7 log.go:181] (0xc002fa8420) Data frame received for 3
I0816 23:21:27.858957 7 log.go:181] (0xc002131220) (3) Data frame handling
I0816 23:21:27.858977 7 log.go:181] (0xc002131220) (3) Data frame sent
I0816 23:21:27.858989 7 log.go:181] (0xc002fa8420) Data frame received for 3
I0816 23:21:27.858999 7 log.go:181] (0xc002131220) (3) Data frame handling
I0816 23:21:27.860331 7 log.go:181] (0xc002fa8420) Data frame received for 1
I0816 23:21:27.860352 7 log.go:181] (0xc00101abe0) (1) Data frame handling
I0816 23:21:27.860365 7 log.go:181] (0xc00101abe0) (1) Data frame sent
I0816 23:21:27.860388 7 log.go:181] (0xc002fa8420) (0xc00101abe0) Stream removed, broadcasting: 1
I0816 23:21:27.860423 7 log.go:181] (0xc002fa8420) Go away received
I0816 23:21:27.860500 7 log.go:181] (0xc002fa8420) (0xc00101abe0) Stream removed, broadcasting: 1
I0816 23:21:27.860527 7 log.go:181] (0xc002fa8420) (0xc002131220) Stream removed, broadcasting: 3
I0816 23:21:27.860540 7 log.go:181] (0xc002fa8420) (0xc000fbdcc0) Stream removed, broadcasting: 5
Aug 16 23:21:27.860: INFO: Exec stderr: ""
Aug 16 23:21:27.860: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.860: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.887958 7 log.go:181] (0xc003383e40) (0xc002131900) Create stream
I0816 23:21:27.887988 7 log.go:181] (0xc003383e40) (0xc002131900) Stream added, broadcasting: 1
I0816 23:21:27.890913 7 log.go:181] (0xc003383e40) Reply frame received for 1
I0816 23:21:27.890965 7 log.go:181] (0xc003383e40) (0xc001f20280) Create stream
I0816 23:21:27.890982 7 log.go:181] (0xc003383e40) (0xc001f20280) Stream added, broadcasting: 3
I0816 23:21:27.891789 7 log.go:181] (0xc003383e40) Reply frame received for 3
I0816 23:21:27.891834 7 log.go:181] (0xc003383e40) (0xc00092c000) Create stream
I0816 23:21:27.891849 7 log.go:181] (0xc003383e40) (0xc00092c000) Stream added, broadcasting: 5
I0816 23:21:27.892705 7 log.go:181] (0xc003383e40) Reply frame received for 5
I0816 23:21:27.958841 7 log.go:181] (0xc003383e40) Data frame received for 3
I0816 23:21:27.958872 7 log.go:181] (0xc001f20280) (3) Data frame handling
I0816 23:21:27.958886 7 log.go:181] (0xc001f20280) (3) Data frame sent
I0816 23:21:27.958892 7 log.go:181] (0xc003383e40) Data frame received for 3
I0816 23:21:27.958897 7 log.go:181] (0xc001f20280) (3) Data frame handling
I0816 23:21:27.959065 7 log.go:181] (0xc003383e40) Data frame received for 5
I0816 23:21:27.959087 7 log.go:181] (0xc00092c000) (5) Data frame handling
I0816 23:21:27.960401 7 log.go:181] (0xc003383e40) Data frame received for 1
I0816 23:21:27.960462 7 log.go:181] (0xc002131900) (1) Data frame handling
I0816 23:21:27.960493 7 log.go:181] (0xc002131900) (1) Data frame sent
I0816 23:21:27.960513 7 log.go:181] (0xc003383e40) (0xc002131900) Stream removed, broadcasting: 1
I0816 23:21:27.960540 7 log.go:181] (0xc003383e40) Go away received
I0816 23:21:27.960648 7 log.go:181] (0xc003383e40) (0xc002131900) Stream removed, broadcasting: 1
I0816 23:21:27.960666 7 log.go:181] (0xc003383e40) (0xc001f20280) Stream removed, broadcasting: 3
I0816 23:21:27.960679 7 log.go:181] (0xc003383e40) (0xc00092c000) Stream removed, broadcasting: 5
Aug 16 23:21:27.960: INFO: Exec stderr: ""
Aug 16 23:21:27.960: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:27.960: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:27.986433 7 log.go:181] (0xc003bea580) (0xc002131e00) Create stream
I0816 23:21:27.986458 7 log.go:181] (0xc003bea580) (0xc002131e00) Stream added, broadcasting: 1
I0816 23:21:27.988967 7 log.go:181] (0xc003bea580) Reply frame received for 1
I0816 23:21:27.989004 7 log.go:181] (0xc003bea580) (0xc0009c6960) Create stream
I0816 23:21:27.989017 7 log.go:181] (0xc003bea580) (0xc0009c6960) Stream added, broadcasting: 3
I0816 23:21:27.989829 7 log.go:181] (0xc003bea580) Reply frame received for 3
I0816 23:21:27.989861 7 log.go:181] (0xc003bea580) (0xc001f20320) Create stream
I0816 23:21:27.989873 7 log.go:181] (0xc003bea580) (0xc001f20320) Stream added, broadcasting: 5
I0816 23:21:27.990613 7 log.go:181] (0xc003bea580) Reply frame received for 5
I0816 23:21:28.049849 7 log.go:181] (0xc003bea580) Data frame received for 5
I0816 23:21:28.049891 7 log.go:181] (0xc001f20320) (5) Data frame handling
I0816 23:21:28.049911 7 log.go:181] (0xc003bea580) Data frame received for 3
I0816 23:21:28.049920 7 log.go:181] (0xc0009c6960) (3) Data frame handling
I0816 23:21:28.049928 7 log.go:181] (0xc0009c6960) (3) Data frame sent
I0816 23:21:28.049937 7 log.go:181] (0xc003bea580) Data frame received for 3
I0816 23:21:28.049947 7 log.go:181] (0xc0009c6960) (3) Data frame handling
I0816 23:21:28.051283 7 log.go:181] (0xc003bea580) Data frame received for 1
I0816 23:21:28.051297 7 log.go:181] (0xc002131e00) (1) Data frame handling
I0816 23:21:28.051315 7 log.go:181] (0xc002131e00) (1) Data frame sent
I0816 23:21:28.051334 7 log.go:181] (0xc003bea580) (0xc002131e00) Stream removed, broadcasting: 1
I0816 23:21:28.051383 7 log.go:181] (0xc003bea580) Go away received
I0816 23:21:28.051434 7 log.go:181] (0xc003bea580) (0xc002131e00) Stream removed, broadcasting: 1
I0816 23:21:28.051455 7 log.go:181] (0xc003bea580) (0xc0009c6960) Stream removed, broadcasting: 3
I0816 23:21:28.051470 7 log.go:181] (0xc003bea580) (0xc001f20320) Stream removed, broadcasting: 5
Aug 16 23:21:28.051: INFO: Exec stderr: ""
Aug 16 23:21:28.051: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5894 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 23:21:28.051: INFO: >>> kubeConfig: /root/.kube/config
I0816 23:21:28.131793 7 log.go:181] (0xc002952d10) (0xc0009c6d20) Create stream
I0816 23:21:28.131827 7 log.go:181] (0xc002952d10) (0xc0009c6d20) Stream added, broadcasting: 1
I0816 23:21:28.141645 7 log.go:181] (0xc002952d10) Reply frame received for 1
I0816 23:21:28.141702 7 log.go:181] (0xc002952d10) (0xc001f20500) Create stream
I0816 23:21:28.141722 7 log.go:181] (0xc002952d10) (0xc001f20500) Stream added, broadcasting: 3
I0816 23:21:28.143096 7 log.go:181] (0xc002952d10) Reply frame received for 3
I0816 23:21:28.143144 7 log.go:181] (0xc002952d10) (0xc001f20640) Create stream
I0816 23:21:28.143168 7 log.go:181] (0xc002952d10) (0xc001f20640) Stream added, broadcasting: 5
I0816 23:21:28.145493 7 log.go:181] (0xc002952d10) Reply frame received for 5
I0816 23:21:28.196008 7 log.go:181] (0xc002952d10) Data frame received for 3
I0816 23:21:28.196030 7 log.go:181] (0xc001f20500) (3) Data frame handling
I0816 23:21:28.196038 7 log.go:181] (0xc001f20500) (3) Data frame sent
I0816 23:21:28.196044 7 log.go:181] (0xc002952d10) Data frame received for 3
I0816 23:21:28.196049 7 log.go:181] (0xc001f20500) (3) Data frame handling
I0816 23:21:28.196069 7 log.go:181] (0xc002952d10) Data frame received for 5
I0816 23:21:28.196076 7 log.go:181] (0xc001f20640) (5) Data frame handling
I0816 23:21:28.197390 7 log.go:181] (0xc002952d10) Data frame received for 1
I0816 23:21:28.197412 7 log.go:181] (0xc0009c6d20) (1) Data frame handling
I0816 23:21:28.197426 7 log.go:181] (0xc0009c6d20) (1) Data frame sent
I0816 23:21:28.197443 7 log.go:181] (0xc002952d10) (0xc0009c6d20) Stream removed, broadcasting: 1
I0816 23:21:28.197463 7 log.go:181] (0xc002952d10) Go away received
I0816 23:21:28.197556 7 log.go:181] (0xc002952d10) (0xc0009c6d20) Stream removed, broadcasting: 1
I0816 23:21:28.197578 7 log.go:181] (0xc002952d10) (0xc001f20500) Stream removed, broadcasting: 3
I0816 23:21:28.197591 7 log.go:181] (0xc002952d10) (0xc001f20640) Stream removed, broadcasting: 5
Aug 16 23:21:28.197: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:21:28.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5894" for this suite.

• [SLOW TEST:24.076 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":2,"skipped":17,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:21:28.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 16 23:21:28.440: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:28.469: INFO: Number of nodes with available pods: 0
Aug 16 23:21:28.470: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:29.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:29.480: INFO: Number of nodes with available pods: 0
Aug 16 23:21:29.480: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:30.911: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:30.962: INFO: Number of nodes with available pods: 0
Aug 16 23:21:30.962: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:31.670: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:31.673: INFO: Number of nodes with available pods: 0
Aug 16 23:21:31.673: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:32.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:32.763: INFO: Number of nodes with available pods: 0
Aug 16 23:21:32.763: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:33.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:33.958: INFO: Number of nodes with available pods: 0
Aug 16 23:21:33.958: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:35.217: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:35.650: INFO: Number of nodes with available pods: 0
Aug 16 23:21:35.650: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:36.491: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:37.018: INFO: Number of nodes with available pods: 0
Aug 16 23:21:37.018: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:37.530: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:37.732: INFO: Number of nodes with available pods: 0
Aug 16 23:21:37.732: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:38.685: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:38.689: INFO: Number of nodes with available pods: 1
Aug 16 23:21:38.689: INFO: Node latest-worker is running more than one daemon pod
Aug 16 23:21:39.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:40.025: INFO: Number of nodes with available pods: 2
Aug 16 23:21:40.025: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 16 23:21:40.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:40.605: INFO: Number of nodes with available pods: 1
Aug 16 23:21:40.605: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:41.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:41.951: INFO: Number of nodes with available pods: 1
Aug 16 23:21:41.951: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:42.609: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:42.612: INFO: Number of nodes with available pods: 1
Aug 16 23:21:42.612: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:43.614: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:43.618: INFO: Number of nodes with available pods: 1
Aug 16 23:21:43.618: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:44.716: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:44.746: INFO: Number of nodes with available pods: 1
Aug 16 23:21:44.746: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:45.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:45.613: INFO: Number of nodes with available pods: 1
Aug 16 23:21:45.613: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:47.150: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:47.156: INFO: Number of nodes with available pods: 1
Aug 16 23:21:47.156: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:47.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:47.614: INFO: Number of nodes with available pods: 1
Aug 16 23:21:47.614: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:48.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:48.612: INFO: Number of nodes with available pods: 1
Aug 16 23:21:48.612: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:49.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:49.971: INFO: Number of nodes with available pods: 1
Aug 16 23:21:49.971: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:51.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:51.628: INFO: Number of nodes with available pods: 1
Aug 16 23:21:51.628: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:52.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:52.645: INFO: Number of nodes with available pods: 1
Aug 16 23:21:52.645: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:53.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:53.821: INFO: Number of nodes with available pods: 1
Aug 16 23:21:53.821: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:54.713: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:54.786: INFO: Number of nodes with available pods: 1
Aug 16 23:21:54.786: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:55.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:56.256: INFO: Number of nodes with available pods: 1
Aug 16 23:21:56.256: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:56.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:56.887: INFO: Number of nodes with available pods: 1
Aug 16 23:21:56.887: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:58.022: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:58.142: INFO: Number of nodes with available pods: 1
Aug 16 23:21:58.142: INFO: Node latest-worker2 is running more than one daemon pod
Aug 16 23:21:58.854: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 23:21:58.901: INFO: Number of nodes with available pods: 2
Aug 16 23:21:58.901: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4587, will wait for the garbage collector to delete the pods
Aug 16 23:21:59.026: INFO: Deleting DaemonSet.extensions daemon-set took: 70.299395ms
Aug 16 23:21:59.226: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.201481ms
Aug 16 23:22:10.336: INFO: Number of nodes with available pods: 0
Aug 16 23:22:10.336: INFO: Number of running nodes: 0, number of available pods: 0
Aug 16 23:22:10.342: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4587/daemonsets","resourceVersion":"525181"},"items":null}
Aug 16 23:22:10.344: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4587/pods","resourceVersion":"525181"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:22:10.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4587" for this suite.

• [SLOW TEST:42.299 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":294,"completed":3,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:22:10.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 16 23:22:10.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:22:21.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4403" for this suite.
• [SLOW TEST:11.299 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":294,"completed":4,"skipped":55,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:22:21.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 23:22:24.246: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 23:22:26.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216943, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 23:22:29.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216944, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733216943, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 23:22:31.835: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 16 23:22:31.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:22:34.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8970" for this suite.
STEP: Destroying namespace "webhook-8970-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.526 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":294,"completed":5,"skipped":55,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:22:34.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Aug 16 23:22:42.436: INFO: Pod pod-hostip-74f99e2f-9784-434c-abad-2228f77dccf0 has hostIP: 172.18.0.14
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:22:42.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5492" for this suite.

• [SLOW TEST:8.114 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":294,"completed":6,"skipped":85,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:22:42.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 16 23:22:42.869: INFO: Waiting up to 5m0s for pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9" in namespace "emptydir-8470" to be "Succeeded or Failed"
Aug 16 23:22:43.110: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9": Phase="Pending", Reason="", readiness=false. Elapsed: 241.416563ms
Aug 16 23:22:45.114: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245177602s
Aug 16 23:22:47.135: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266450128s
Aug 16 23:22:49.526: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9": Phase="Running", Reason="", readiness=true. Elapsed: 6.657201024s
Aug 16 23:22:51.529: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.659738095s
STEP: Saw pod success
Aug 16 23:22:51.529: INFO: Pod "pod-449ffea9-6982-42f8-ab10-6c25cf4199c9" satisfied condition "Succeeded or Failed"
Aug 16 23:22:51.530: INFO: Trying to get logs from node latest-worker2 pod pod-449ffea9-6982-42f8-ab10-6c25cf4199c9 container test-container:
STEP: delete the pod
Aug 16 23:22:51.593: INFO: Waiting for pod pod-449ffea9-6982-42f8-ab10-6c25cf4199c9 to disappear
Aug 16 23:22:51.595: INFO: Pod pod-449ffea9-6982-42f8-ab10-6c25cf4199c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:22:51.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8470" for this suite.

• [SLOW TEST:9.158 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":7,"skipped":161,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:22:51.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2325
STEP: creating service affinity-clusterip in namespace services-2325
STEP: creating replication controller affinity-clusterip in namespace services-2325
I0816 23:22:51.748300 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2325, replica count: 3
I0816 23:22:54.798601 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0816 23:22:57.798781 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0816 23:23:00.798965 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 16 23:23:00.802: INFO: Creating new exec pod
Aug 16 23:23:05.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2325 execpod-affinityw46qj -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Aug 16 23:23:11.263: INFO: stderr: "I0816 23:23:10.252335 27 log.go:181] (0xc00003b290) (0xc000a795e0) Create stream\nI0816 23:23:10.252380 27 log.go:181] (0xc00003b290) (0xc000a795e0) Stream added, broadcasting: 1\nI0816 23:23:10.253691 27 log.go:181] (0xc00003b290) Reply frame received for 1\nI0816 23:23:10.253722 27 log.go:181] (0xc00003b290) (0xc0008b8c80) Create stream\nI0816 23:23:10.253731 27 log.go:181] (0xc00003b290) (0xc0008b8c80) Stream added, broadcasting: 3\nI0816 23:23:10.254338 27 log.go:181] (0xc00003b290) Reply frame received for 3\nI0816 23:23:10.254397 27 log.go:181] (0xc00003b290) (0xc000850a00) Create stream\nI0816 23:23:10.254427 27 log.go:181] (0xc00003b290) (0xc000850a00) Stream added, broadcasting: 5\nI0816 23:23:10.255021 27 log.go:181] (0xc00003b290) Reply frame received for 5\nI0816 23:23:10.309888 27 log.go:181] (0xc00003b290) Data frame received for 5\nI0816 23:23:10.309904 27 log.go:181] (0xc000850a00) (5) Data frame handling\nI0816 23:23:10.309915 27 log.go:181] (0xc000850a00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0816 23:23:11.252356 27 log.go:181] (0xc00003b290) Data frame received for 5\nI0816 23:23:11.252387 27 log.go:181] (0xc000850a00) (5) Data frame handling\nI0816 23:23:11.252410 27 log.go:181] (0xc000850a00) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0816 23:23:11.253085 27 log.go:181] (0xc00003b290) Data frame received for 5\nI0816 23:23:11.253126 27 log.go:181] (0xc000850a00) (5) Data frame handling\nI0816 23:23:11.253168 27 log.go:181] (0xc00003b290) Data frame received for 3\nI0816 23:23:11.253191 27 log.go:181] (0xc0008b8c80) (3) Data frame handling\nI0816 23:23:11.255139 27 log.go:181] (0xc00003b290) Data frame received for 1\nI0816 23:23:11.255177 27 log.go:181] (0xc000a795e0) (1) Data frame handling\nI0816 23:23:11.255223 27 log.go:181] (0xc000a795e0) (1) Data frame sent\nI0816 23:23:11.255255 27 log.go:181] (0xc00003b290) (0xc000a795e0) Stream removed, broadcasting: 1\nI0816 23:23:11.255305 27 log.go:181] (0xc00003b290) Go away received\nI0816 23:23:11.255663 27 log.go:181] (0xc00003b290) (0xc000a795e0) Stream removed, broadcasting: 1\nI0816 23:23:11.255687 27 log.go:181] (0xc00003b290) (0xc0008b8c80) Stream removed, broadcasting: 3\nI0816 23:23:11.255707 27 log.go:181] (0xc00003b290) (0xc000850a00) Stream removed, broadcasting: 5\n"
Aug 16 23:23:11.263: INFO: stdout: ""
Aug 16 23:23:11.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2325 execpod-affinityw46qj -- /bin/sh -x -c nc -zv -t -w 2 10.104.29.16 80'
Aug 16 23:23:12.510: INFO: stderr: "I0816 23:23:12.451490 45 log.go:181] (0xc000b93080) (0xc0001da1e0) Create stream\nI0816 23:23:12.451533 45 log.go:181] (0xc000b93080) (0xc0001da1e0) Stream added, broadcasting: 1\nI0816 23:23:12.454147 45 log.go:181] (0xc000b93080) Reply frame received for 1\nI0816 23:23:12.454180 45 log.go:181] (0xc000b93080) (0xc0005e63c0) Create stream\nI0816 23:23:12.454195 45 log.go:181] (0xc000b93080) (0xc0005e63c0) Stream added, broadcasting: 3\nI0816 23:23:12.454923 45 log.go:181] (0xc000b93080) Reply frame received for 3\nI0816 23:23:12.454958 45 log.go:181] (0xc000b93080) (0xc00059a0a0) Create stream\nI0816 23:23:12.454968 45 log.go:181] (0xc000b93080) (0xc00059a0a0) Stream added, broadcasting: 5\nI0816 23:23:12.455607 45 log.go:181] (0xc000b93080) Reply frame received for 5\nI0816 23:23:12.504627 45 log.go:181] (0xc000b93080) Data frame received for 5\nI0816 23:23:12.504660 45 log.go:181] (0xc000b93080) Data frame received for 3\nI0816 23:23:12.504687 45 log.go:181] (0xc0005e63c0) (3) Data frame handling\nI0816 23:23:12.504709 45 log.go:181] (0xc00059a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.29.16 80\nConnection to 10.104.29.16 80 port [tcp/http] succeeded!\nI0816 23:23:12.504789 45 log.go:181] (0xc00059a0a0) (5) Data frame sent\nI0816 23:23:12.504804 45 log.go:181] (0xc000b93080) Data frame received for 5\nI0816 23:23:12.504823 45 log.go:181] (0xc00059a0a0) (5) Data frame handling\nI0816 23:23:12.505585 45 log.go:181] (0xc000b93080) Data frame received for 1\nI0816 23:23:12.505606 45 log.go:181] (0xc0001da1e0) (1) Data frame handling\nI0816 23:23:12.505616 45 log.go:181] (0xc0001da1e0) (1) Data frame sent\nI0816 23:23:12.505626 45 log.go:181] (0xc000b93080) (0xc0001da1e0) Stream removed, broadcasting: 1\nI0816 23:23:12.505640 45 log.go:181] (0xc000b93080) Go away received\nI0816 23:23:12.505859 45 log.go:181] (0xc000b93080) (0xc0001da1e0) Stream removed, broadcasting: 1\nI0816 23:23:12.505869 45 log.go:181] (0xc000b93080) (0xc0005e63c0) Stream removed, broadcasting: 3\nI0816 23:23:12.505874 45 log.go:181] (0xc000b93080) (0xc00059a0a0) Stream removed, broadcasting: 5\n"
Aug 16 23:23:12.510: INFO: stdout: ""
Aug 16 23:23:12.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2325 execpod-affinityw46qj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.29.16:80/ ; done'
Aug 16 23:23:13.394: INFO: stderr: "I0816 23:23:13.247574 63 log.go:181] (0xc000654fd0) (0xc000b2ba40) Create stream\nI0816 23:23:13.247651 63 log.go:181] (0xc000654fd0) (0xc000b2ba40) Stream added, broadcasting: 1\nI0816 23:23:13.251519 63 log.go:181] (0xc000654fd0) Reply frame received for 1\nI0816 23:23:13.251545 63 log.go:181] (0xc000654fd0) (0xc0002e0280) Create stream\nI0816 23:23:13.251553 63 log.go:181] (0xc000654fd0) (0xc0002e0280) Stream added, broadcasting: 3\nI0816 23:23:13.252337 63 log.go:181] (0xc000654fd0) Reply frame received for 3\nI0816 23:23:13.252365 63 log.go:181] (0xc000654fd0) (0xc0002ec5a0) Create stream\nI0816 23:23:13.252374 63 log.go:181] (0xc000654fd0) (0xc0002ec5a0) Stream added, broadcasting: 5\nI0816 23:23:13.253127 63 log.go:181] (0xc000654fd0) Reply frame received for 5\nI0816 23:23:13.310812 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.310854 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.310877 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.310892 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.310902 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.310915 63 log.go:181] (0xc0002e0280) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.316294 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.316316 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.316331 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.316719 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.316826
63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.316840 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.316859 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.316884 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.316912 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.321691 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.321713 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.321727 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.322035 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.322056 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.322097 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.322116 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.322136 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.322155 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.327584 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.327604 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.327618 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.328081 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.328104 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.328111 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.328124 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.328137 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.328143 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.331652 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.331672 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.331687 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.331925 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.331942 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.331961 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.331998 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.332010 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.332020 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.334920 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.334941 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.334966 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.335305 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.335334 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.335346 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.335357 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.335365 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.335372 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.341051 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.341087 63 log.go:181] 
(0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.341121 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.341408 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.341438 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.341462 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.341487 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.341497 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.341506 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.345168 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.345195 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.345215 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.345594 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.345607 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.345615 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\nI0816 23:23:13.345721 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.345737 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.345745 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.345769 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.345788 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.345798 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.348664 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.348677 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.348700 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.349492 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.349536 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.349572 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.349585 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.349594 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.349616 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.349653 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.349670 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.349691 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.353282 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.353306 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.353338 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.353863 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.353902 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.353937 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -qI0816 23:23:13.353997 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.354008 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.354014 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.354024 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.354030 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.354035 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n -s 
--connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.357187 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.357218 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.357258 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.357716 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.357734 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.357746 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.357758 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.357767 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.357776 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.361387 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.361399 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.361405 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.361811 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.361833 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.361843 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.361860 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.361867 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.361875 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.365392 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.365417 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.365440 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.366096 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.366106 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.366113 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.366125 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.366137 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.366149 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.369937 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.369960 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.369983 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.370763 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.370778 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.370799 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.370811 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.370817 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.29.16:80/\nI0816 23:23:13.370832 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.371090 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.371103 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.371116 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.375613 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.375629 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.375646 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.376198 63 log.go:181] 
(0xc000654fd0) Data frame received for 5\nI0816 23:23:13.376211 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.376219 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0816 23:23:13.376313 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.376335 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.376356 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n http://10.104.29.16:80/\nI0816 23:23:13.376443 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.376460 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.376473 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.380613 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.380631 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.380643 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.381645 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.381667 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.381686 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0816 23:23:13.381702 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.381720 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.381790 63 log.go:181] (0xc0002ec5a0) (5) Data frame sent\nI0816 23:23:13.381805 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.381822 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.381833 63 log.go:181] (0xc0002e0280) (3) Data frame sent\n 2 http://10.104.29.16:80/\nI0816 23:23:13.385931 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.385953 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.386038 63 log.go:181] (0xc0002e0280) (3) Data frame sent\nI0816 23:23:13.386821 63 log.go:181] (0xc000654fd0) Data frame received for 5\nI0816 23:23:13.386835 63 log.go:181] (0xc0002ec5a0) (5) Data frame handling\nI0816 23:23:13.387101 63 log.go:181] (0xc000654fd0) Data frame received for 3\nI0816 23:23:13.387114 63 log.go:181] (0xc0002e0280) (3) Data frame handling\nI0816 23:23:13.389104 63 log.go:181] (0xc000654fd0) Data frame received for 1\nI0816 23:23:13.389121 63 log.go:181] (0xc000b2ba40) (1) Data frame handling\nI0816 23:23:13.389135 63 log.go:181] (0xc000b2ba40) (1) Data frame sent\nI0816 23:23:13.389153 63 log.go:181] (0xc000654fd0) (0xc000b2ba40) Stream removed, broadcasting: 1\nI0816 23:23:13.389167 63 log.go:181] (0xc000654fd0) Go away received\nI0816 23:23:13.389541 63 log.go:181] (0xc000654fd0) (0xc000b2ba40) Stream removed, broadcasting: 1\nI0816 23:23:13.389561 63 log.go:181] (0xc000654fd0) (0xc0002e0280) Stream removed, broadcasting: 3\nI0816 23:23:13.389572 63 log.go:181] (0xc000654fd0) (0xc0002ec5a0) Stream removed, broadcasting: 5\n" Aug 16 23:23:13.395: INFO: stdout: "\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt\naffinity-clusterip-66tgt" Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt Aug 16 23:23:13.395: INFO: Received response from host: 
affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Received response from host: affinity-clusterip-66tgt
Aug 16 23:23:13.395: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-2325, will wait for the garbage collector to delete the pods
Aug 16 23:23:15.084: INFO: Deleting ReplicationController affinity-clusterip took: 302.55567ms
Aug 16 23:23:16.084: INFO: Terminating ReplicationController affinity-clusterip pods took: 1.000185256s
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:23:30.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2325" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735
• [SLOW TEST:38.857 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":8,"skipped":177,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:23:30.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:23:46.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3502" for this suite.
• [SLOW TEST:16.571 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope.
[Conformance]","total":294,"completed":9,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:23:47.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7293 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7293 I0816 23:23:47.886142 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7293, replica count: 2 I0816 23:23:50.936522 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:23:53.936691 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:23:56.936864 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 23:23:56.936: INFO: Creating new exec pod Aug 16 23:24:03.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7293 execpodjzprm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 16 23:24:04.449: INFO: stderr: "I0816 23:24:04.376531 81 log.go:181] (0xc0006d0000) (0xc000a7a320) Create stream\nI0816 23:24:04.376575 81 log.go:181] (0xc0006d0000) (0xc000a7a320) Stream added, broadcasting: 1\nI0816 23:24:04.378059 81 log.go:181] (0xc0006d0000) Reply frame received for 1\nI0816 23:24:04.378096 81 log.go:181] (0xc0006d0000) (0xc000a24460) Create stream\nI0816 23:24:04.378105 81 log.go:181] (0xc0006d0000) (0xc000a24460) Stream added, broadcasting: 3\nI0816 23:24:04.379072 81 log.go:181] (0xc0006d0000) Reply frame received for 3\nI0816 23:24:04.379088 81 log.go:181] (0xc0006d0000) (0xc000a24d20) Create stream\nI0816 23:24:04.379097 81 log.go:181] (0xc0006d0000) (0xc000a24d20) Stream added, broadcasting: 5\nI0816 23:24:04.379653 81 log.go:181] (0xc0006d0000) Reply frame received for 5\nI0816 23:24:04.438173 81 log.go:181] (0xc0006d0000) Data frame received for 5\nI0816 23:24:04.438219 81 log.go:181] (0xc000a24d20) (5) Data frame handling\nI0816 23:24:04.438244 81 log.go:181] (0xc000a24d20) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0816 23:24:04.438537 81 log.go:181] 
(0xc0006d0000) Data frame received for 5\nI0816 23:24:04.438578 81 log.go:181] (0xc000a24d20) (5) Data frame handling\nI0816 23:24:04.438601 81 log.go:181] (0xc000a24d20) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0816 23:24:04.438680 81 log.go:181] (0xc0006d0000) Data frame received for 3\nI0816 23:24:04.438715 81 log.go:181] (0xc000a24460) (3) Data frame handling\nI0816 23:24:04.438923 81 log.go:181] (0xc0006d0000) Data frame received for 5\nI0816 23:24:04.438939 81 log.go:181] (0xc000a24d20) (5) Data frame handling\nI0816 23:24:04.440420 81 log.go:181] (0xc0006d0000) Data frame received for 1\nI0816 23:24:04.440447 81 log.go:181] (0xc000a7a320) (1) Data frame handling\nI0816 23:24:04.440465 81 log.go:181] (0xc000a7a320) (1) Data frame sent\nI0816 23:24:04.442344 81 log.go:181] (0xc0006d0000) (0xc000a7a320) Stream removed, broadcasting: 1\nI0816 23:24:04.442610 81 log.go:181] (0xc0006d0000) (0xc000a7a320) Stream removed, broadcasting: 1\nI0816 23:24:04.442623 81 log.go:181] (0xc0006d0000) (0xc000a24460) Stream removed, broadcasting: 3\nI0816 23:24:04.442631 81 log.go:181] (0xc0006d0000) (0xc000a24d20) Stream removed, broadcasting: 5\n" Aug 16 23:24:04.449: INFO: stdout: "" Aug 16 23:24:04.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7293 execpodjzprm -- /bin/sh -x -c nc -zv -t -w 2 10.100.196.168 80' Aug 16 23:24:04.694: INFO: stderr: "I0816 23:24:04.626037 99 log.go:181] (0xc000ea11e0) (0xc000971540) Create stream\nI0816 23:24:04.626080 99 log.go:181] (0xc000ea11e0) (0xc000971540) Stream added, broadcasting: 1\nI0816 23:24:04.629314 99 log.go:181] (0xc000ea11e0) Reply frame received for 1\nI0816 23:24:04.629345 99 log.go:181] (0xc000ea11e0) (0xc0003025a0) Create stream\nI0816 23:24:04.629356 99 log.go:181] (0xc000ea11e0) (0xc0003025a0) Stream added, broadcasting: 3\nI0816 23:24:04.630263 99 log.go:181] (0xc000ea11e0) Reply frame received for 3\nI0816 23:24:04.630299 99 log.go:181] (0xc000ea11e0) (0xc0008b6640) Create stream\nI0816 23:24:04.630313 99 log.go:181] (0xc000ea11e0) (0xc0008b6640) Stream added, broadcasting: 5\nI0816 23:24:04.631306 99 log.go:181] (0xc000ea11e0) Reply frame received for 5\nI0816 23:24:04.688300 99 log.go:181] (0xc000ea11e0) Data frame received for 3\nI0816 23:24:04.688325 99 log.go:181] (0xc0003025a0) (3) Data frame handling\nI0816 23:24:04.688375 99 log.go:181] (0xc000ea11e0) Data frame received for 5\nI0816 23:24:04.688421 99 log.go:181] (0xc0008b6640) (5) Data frame handling\nI0816 23:24:04.688460 99 log.go:181] (0xc0008b6640) (5) Data frame sent\nI0816 23:24:04.688484 99 log.go:181] (0xc000ea11e0) Data frame received for 5\nI0816 23:24:04.688496 99 log.go:181] (0xc0008b6640) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.196.168 80\nConnection to 10.100.196.168 80 port [tcp/http] succeeded!\nI0816 23:24:04.689617 99 log.go:181] (0xc000ea11e0) Data frame received for 1\nI0816 23:24:04.689631 99 log.go:181] (0xc000971540) (1) Data frame handling\nI0816 23:24:04.689640 99 log.go:181] (0xc000971540) (1) Data frame sent\nI0816 23:24:04.689650 99 log.go:181] (0xc000ea11e0) (0xc000971540) Stream removed, broadcasting: 1\nI0816 23:24:04.689760 99 log.go:181] (0xc000ea11e0) Go away received\nI0816 23:24:04.689917 99 log.go:181] (0xc000ea11e0) (0xc000971540) Stream removed, broadcasting: 1\nI0816 23:24:04.689929 99 log.go:181] (0xc000ea11e0) (0xc0003025a0) Stream removed, broadcasting: 3\nI0816 23:24:04.689936 99 log.go:181] 
(0xc000ea11e0) (0xc0008b6640) Stream removed, broadcasting: 5\n"
Aug 16 23:24:04.694: INFO: stdout: ""
Aug 16 23:24:04.694: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:24:04.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7293" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735
• [SLOW TEST:17.922 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":294,"completed":10,"skipped":208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:24:04.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
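(Editor's note: the test below exercises a postStart httpGet lifecycle hook against the handler container created above. For readers who want to reproduce it by hand, a minimal sketch of such a pod follows; the pod name mirrors the one in this log, but the image, path, port, and host values are illustrative assumptions, not the manifest the e2e framework actually generates.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-http-hook
    spec:
      containers:
      - name: main                      # hypothetical container name
        image: k8s.gcr.io/pause:3.2     # assumed long-running image
        lifecycle:
          postStart:
            httpGet:
              path: /echo               # hypothetical endpoint on the handler pod
              port: 8080                # hypothetical handler port
              host: 10.244.0.10         # hypothetical IP of the handler pod
    EOF

The kubelet does not mark the container Running until the postStart handler returns, which is why the framework can check the hook fired before the pod reports ready.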
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 16 23:24:24.027: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 23:24:24.105: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 23:24:26.105: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 23:24:26.205: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 23:24:28.105: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 23:24:28.109: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 23:24:30.105: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 23:24:30.276: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:24:30.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3022" for this suite.
• [SLOW TEST:25.331 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":294,"completed":11,"skipped":247,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:24:30.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-4b744bcb-6d1f-4084-b090-85923dcaea7c
STEP: Creating a pod to test consume secrets
Aug 16 23:24:30.651: INFO: Waiting up to 5m0s for pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec" in namespace "secrets-7365" to be "Succeeded or Failed"
Aug 16 23:24:30.684: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.651601ms
Aug 16 23:24:32.780: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12859111s
Aug 16 23:24:35.151: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500203417s
Aug 16 23:24:37.334: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.682884498s
Aug 16 23:24:39.459: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807568425s
Aug 16 23:24:41.531: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.880140335s
STEP: Saw pod success
Aug 16 23:24:41.531: INFO: Pod "pod-secrets-78587a02-5c49-465e-baff-e388f25148ec" satisfied condition "Succeeded or Failed"
Aug 16 23:24:41.540: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-78587a02-5c49-465e-baff-e388f25148ec container secret-volume-test:
STEP: delete the pod
Aug 16 23:24:43.018: INFO: Waiting for pod pod-secrets-78587a02-5c49-465e-baff-e388f25148ec to disappear
Aug 16 23:24:43.039: INFO: Pod pod-secrets-78587a02-5c49-465e-baff-e388f25148ec no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:24:43.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7365" for this suite.
• [SLOW TEST:12.861 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":12,"skipped":251,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:24:43.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:24:51.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5276" for this suite.
• [SLOW TEST:8.734 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when scheduling a busybox Pod with hostAliases
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":13,"skipped":253,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:24:51.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-91f5b6f9-94ce-4c8b-affc-c11dfebee34f
STEP: Creating secret with name secret-projected-all-test-volume-8e54e360-57fc-452d-bb21-928a1c610fb6
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 16 23:24:52.342: INFO: Waiting up to 5m0s for pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15" in namespace "projected-8002" to be "Succeeded or Failed"
Aug 16 23:24:52.410: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15": Phase="Pending", Reason="", readiness=false. Elapsed: 68.323756ms
Aug 16 23:24:54.421: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078557163s
Aug 16 23:24:56.504: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161915352s
Aug 16 23:24:58.511: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15": Phase="Running", Reason="", readiness=true. Elapsed: 6.16934834s
Aug 16 23:25:00.519: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176834262s
STEP: Saw pod success
Aug 16 23:25:00.519: INFO: Pod "projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15" satisfied condition "Succeeded or Failed"
Aug 16 23:25:00.521: INFO: Trying to get logs from node latest-worker pod projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15 container projected-all-volume-test:
STEP: delete the pod
Aug 16 23:25:00.555: INFO: Waiting for pod projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15 to disappear
Aug 16 23:25:00.582: INFO: Pod projected-volume-4081de1b-2ced-4920-8d12-5ab956a5ef15 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 16 23:25:00.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8002" for this suite.
• [SLOW TEST:8.710 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":294,"completed":14,"skipped":258,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 16 23:25:00.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[BeforeEach] Kubectl logs
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410
STEP: creating an pod
Aug 16 23:25:00.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 --namespace=kubectl-7352 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 16 23:25:00.790: INFO: stderr: ""
Aug 16 23:25:00.790: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Waiting for log generator to start.
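(Editor's note: the log-filtering steps below drive standard kubectl flags; a minimal session reproducing them by hand, assuming the same logs-generator pod and kubectl-7352 namespace as in this run:)

    kubectl logs logs-generator logs-generator --namespace=kubectl-7352                   # pod then container name; full log
    kubectl logs logs-generator logs-generator --namespace=kubectl-7352 --tail=1          # only the most recent line
    kubectl logs logs-generator logs-generator --namespace=kubectl-7352 --limit-bytes=1   # truncate output after one byte
    kubectl logs logs-generator logs-generator --namespace=kubectl-7352 --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
    kubectl logs logs-generator logs-generator --namespace=kubectl-7352 --since=1s        # only entries from the last second
    kubectl logs logs-generator logs-generator --namespace=kubectl-7352 --since=24h       # entries from the last 24 hours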
Aug 16 23:25:00.790: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 16 23:25:00.790: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7352" to be "running and ready, or succeeded"
Aug 16 23:25:00.823: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 32.507317ms
Aug 16 23:25:02.851: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060848399s
Aug 16 23:25:04.854: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.06346882s
Aug 16 23:25:04.854: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 16 23:25:04.854: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Aug 16 23:25:04.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352'
Aug 16 23:25:04.958: INFO: stderr: ""
Aug 16 23:25:04.958: INFO: stdout: "I0816 23:25:03.975554 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/cmw 420\nI0816 23:25:04.175681 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/sz7q 387\nI0816 23:25:04.375654 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/49q 233\nI0816 23:25:04.575672 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/st27 506\nI0816 23:25:04.775644 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/v4x2 533\n"
STEP: limiting log lines
Aug 16 23:25:04.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352 --tail=1'
Aug 16 23:25:05.136: INFO: stderr: ""
Aug 16 23:25:05.136: INFO: stdout: "I0816 23:25:04.975655 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/m2x 514\n"
Aug 16 23:25:05.136: INFO: got output "I0816 23:25:04.975655 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/m2x 514\n"
STEP: limiting log bytes
Aug 16 23:25:05.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352 --limit-bytes=1'
Aug 16 23:25:05.572: INFO: stderr: ""
Aug 16 23:25:05.572: INFO: stdout: "I"
Aug 16 23:25:05.572: INFO: got output "I"
STEP: exposing timestamps
Aug 16 23:25:05.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352 --tail=1 --timestamps'
Aug 16 23:25:05.734: INFO: stderr: ""
Aug 16 23:25:05.734: INFO: stdout: "2020-08-16T23:25:05.575813224Z I0816 23:25:05.575704 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/s5n 251\n"
Aug 16 23:25:05.734: INFO: got output "2020-08-16T23:25:05.575813224Z I0816 23:25:05.575704 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/s5n 251\n"
STEP: restricting to a time range
Aug 16 23:25:08.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352 --since=1s'
Aug 16 23:25:08.346: INFO: stderr: ""
Aug 16 23:25:08.346: INFO: stdout: "I0816 23:25:07.375727 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/zc8 479\nI0816 23:25:07.575666 1 logs_generator.go:76] 18 PUT
/api/v1/namespaces/kube-system/pods/67g 470\nI0816 23:25:07.775750 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/dn4 527\nI0816 23:25:07.975670 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/l8tf 496\nI0816 23:25:08.175787 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/gjq 416\n" Aug 16 23:25:08.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7352 --since=24h' Aug 16 23:25:08.498: INFO: stderr: "" Aug 16 23:25:08.498: INFO: stdout: "I0816 23:25:03.975554 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/cmw 420\nI0816 23:25:04.175681 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/sz7q 387\nI0816 23:25:04.375654 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/49q 233\nI0816 23:25:04.575672 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/st27 506\nI0816 23:25:04.775644 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/v4x2 533\nI0816 23:25:04.975655 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/m2x 514\nI0816 23:25:05.175709 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/tct 485\nI0816 23:25:05.375716 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/z85 449\nI0816 23:25:05.575704 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/s5n 251\nI0816 23:25:05.775684 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/csp 558\nI0816 23:25:05.975734 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/74p 264\nI0816 23:25:06.175706 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/p9k5 358\nI0816 23:25:06.375652 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/ns8 235\nI0816 23:25:06.575686 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/477r 339\nI0816 23:25:06.775691 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/qzb7 443\nI0816 23:25:06.975654 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/nvbv 278\nI0816 23:25:07.175683 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/n8d 465\nI0816 23:25:07.375727 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/zc8 479\nI0816 23:25:07.575666 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/67g 470\nI0816 23:25:07.775750 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/dn4 527\nI0816 23:25:07.975670 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/l8tf 496\nI0816 23:25:08.175787 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/gjq 416\nI0816 23:25:08.375726 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/zjf8 243\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Aug 16 23:25:08.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7352' Aug 16 23:25:21.221: INFO: stderr: "" Aug 16 23:25:21.221: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:25:21.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-7352" for this suite. • [SLOW TEST:21.133 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1406 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":294,"completed":15,"skipped":268,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:25:21.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2132 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-2132 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2132 Aug 16 23:25:22.713: INFO: Found 0 stateful pods, waiting for 1 Aug 16 23:25:32.717: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Aug 16 23:25:42.747: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 16 23:25:42.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:25:43.491: INFO: stderr: "I0816 23:25:43.360972 257 log.go:181] (0xc0006a9d90) (0xc000c195e0) Create stream\nI0816 23:25:43.361028 257 log.go:181] (0xc0006a9d90) (0xc000c195e0) Stream added, broadcasting: 1\nI0816 23:25:43.362792 257 log.go:181] (0xc0006a9d90) Reply frame received for 1\nI0816 23:25:43.362829 257 log.go:181] (0xc0006a9d90) (0xc000892f00) Create stream\nI0816 23:25:43.362847 257 log.go:181] (0xc0006a9d90) (0xc000892f00) Stream added, broadcasting: 3\nI0816 
23:25:43.363503 257 log.go:181] (0xc0006a9d90) Reply frame received for 3\nI0816 23:25:43.363531 257 log.go:181] (0xc0006a9d90) (0xc0009dec80) Create stream\nI0816 23:25:43.363539 257 log.go:181] (0xc0006a9d90) (0xc0009dec80) Stream added, broadcasting: 5\nI0816 23:25:43.364187 257 log.go:181] (0xc0006a9d90) Reply frame received for 5\nI0816 23:25:43.427671 257 log.go:181] (0xc0006a9d90) Data frame received for 5\nI0816 23:25:43.427694 257 log.go:181] (0xc0009dec80) (5) Data frame handling\nI0816 23:25:43.427713 257 log.go:181] (0xc0009dec80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:25:43.482723 257 log.go:181] (0xc0006a9d90) Data frame received for 5\nI0816 23:25:43.482759 257 log.go:181] (0xc0009dec80) (5) Data frame handling\nI0816 23:25:43.482790 257 log.go:181] (0xc0006a9d90) Data frame received for 3\nI0816 23:25:43.482805 257 log.go:181] (0xc000892f00) (3) Data frame handling\nI0816 23:25:43.482819 257 log.go:181] (0xc000892f00) (3) Data frame sent\nI0816 23:25:43.482831 257 log.go:181] (0xc0006a9d90) Data frame received for 3\nI0816 23:25:43.482841 257 log.go:181] (0xc000892f00) (3) Data frame handling\nI0816 23:25:43.483928 257 log.go:181] (0xc0006a9d90) Data frame received for 1\nI0816 23:25:43.483953 257 log.go:181] (0xc000c195e0) (1) Data frame handling\nI0816 23:25:43.483971 257 log.go:181] (0xc000c195e0) (1) Data frame sent\nI0816 23:25:43.483985 257 log.go:181] (0xc0006a9d90) (0xc000c195e0) Stream removed, broadcasting: 1\nI0816 23:25:43.484001 257 log.go:181] (0xc0006a9d90) Go away received\nI0816 23:25:43.484283 257 log.go:181] (0xc0006a9d90) (0xc000c195e0) Stream removed, broadcasting: 1\nI0816 23:25:43.484293 257 log.go:181] (0xc0006a9d90) (0xc000892f00) Stream removed, broadcasting: 3\nI0816 23:25:43.484298 257 log.go:181] (0xc0006a9d90) (0xc0009dec80) Stream removed, broadcasting: 5\n" Aug 16 23:25:43.492: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:25:43.492: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:25:43.512: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 16 23:25:53.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:25:53.514: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:25:53.911: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:25:53.911: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:25:53.912: INFO: Aug 16 23:25:53.912: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 16 23:25:55.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.606697094s Aug 16 23:25:56.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.481352363s Aug 16 23:25:57.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.09815463s Aug 16 23:25:59.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935390197s Aug 16 23:26:00.200: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 3.333197914s Aug 16 23:26:01.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.321683306s Aug 16 23:26:02.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 967.312957ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2132 Aug 16 23:26:05.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:26:07.155: INFO: stderr: "I0816 23:26:07.006957 275 log.go:181] (0xc00003a0b0) (0xc00099c000) Create stream\nI0816 23:26:07.007004 275 log.go:181] (0xc00003a0b0) (0xc00099c000) Stream added, broadcasting: 1\nI0816 23:26:07.008444 275 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0816 23:26:07.008472 275 log.go:181] (0xc00003a0b0) (0xc000998640) Create stream\nI0816 23:26:07.008482 275 log.go:181] (0xc00003a0b0) (0xc000998640) Stream added, broadcasting: 3\nI0816 23:26:07.009374 275 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0816 23:26:07.009418 275 log.go:181] (0xc00003a0b0) (0xc0007dad20) Create stream\nI0816 23:26:07.009448 275 log.go:181] (0xc00003a0b0) (0xc0007dad20) Stream added, broadcasting: 5\nI0816 23:26:07.010215 275 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0816 23:26:07.076379 275 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0816 23:26:07.076408 275 log.go:181] (0xc0007dad20) (5) Data frame handling\nI0816 23:26:07.076428 275 log.go:181] (0xc0007dad20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 23:26:07.145870 275 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0816 23:26:07.145924 275 log.go:181] (0xc000998640) (3) Data frame handling\nI0816 23:26:07.145953 275 log.go:181] (0xc000998640) (3) Data frame sent\nI0816 23:26:07.145979 275 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0816 23:26:07.145998 275 log.go:181] (0xc000998640) (3) Data frame handling\nI0816 23:26:07.146135 275 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0816 23:26:07.146155 275 log.go:181] (0xc0007dad20) (5) Data frame handling\nI0816 23:26:07.149066 275 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0816 23:26:07.149088 275 log.go:181] (0xc00099c000) (1) Data frame handling\nI0816 23:26:07.149106 275 log.go:181] (0xc00099c000) (1) Data frame sent\nI0816 23:26:07.149221 275 log.go:181] (0xc00003a0b0) (0xc00099c000) Stream removed, broadcasting: 1\nI0816 23:26:07.149245 275 log.go:181] (0xc00003a0b0) Go away received\nI0816 23:26:07.149766 275 log.go:181] (0xc00003a0b0) (0xc00099c000) Stream removed, broadcasting: 1\nI0816 23:26:07.149795 275 log.go:181] (0xc00003a0b0) (0xc000998640) Stream removed, broadcasting: 3\nI0816 23:26:07.149814 275 log.go:181] (0xc00003a0b0) (0xc0007dad20) Stream removed, broadcasting: 5\n" Aug 16 23:26:07.155: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:26:07.155: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:26:07.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:26:08.432: INFO: stderr: "I0816 23:26:08.359473 293 log.go:181] 
(0xc000140370) (0xc000a06aa0) Create stream\nI0816 23:26:08.359559 293 log.go:181] (0xc000140370) (0xc000a06aa0) Stream added, broadcasting: 1\nI0816 23:26:08.361111 293 log.go:181] (0xc000140370) Reply frame received for 1\nI0816 23:26:08.361136 293 log.go:181] (0xc000140370) (0xc0009f6be0) Create stream\nI0816 23:26:08.361145 293 log.go:181] (0xc000140370) (0xc0009f6be0) Stream added, broadcasting: 3\nI0816 23:26:08.361717 293 log.go:181] (0xc000140370) Reply frame received for 3\nI0816 23:26:08.361740 293 log.go:181] (0xc000140370) (0xc000a534a0) Create stream\nI0816 23:26:08.361748 293 log.go:181] (0xc000140370) (0xc000a534a0) Stream added, broadcasting: 5\nI0816 23:26:08.362459 293 log.go:181] (0xc000140370) Reply frame received for 5\nI0816 23:26:08.421250 293 log.go:181] (0xc000140370) Data frame received for 5\nI0816 23:26:08.421278 293 log.go:181] (0xc000a534a0) (5) Data frame handling\nI0816 23:26:08.421286 293 log.go:181] (0xc000a534a0) (5) Data frame sent\nI0816 23:26:08.421291 293 log.go:181] (0xc000140370) Data frame received for 5\nI0816 23:26:08.421295 293 log.go:181] (0xc000a534a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0816 23:26:08.421319 293 log.go:181] (0xc000140370) Data frame received for 3\nI0816 23:26:08.421335 293 log.go:181] (0xc0009f6be0) (3) Data frame handling\nI0816 23:26:08.421350 293 log.go:181] (0xc0009f6be0) (3) Data frame sent\nI0816 23:26:08.421365 293 log.go:181] (0xc000140370) Data frame received for 3\nI0816 23:26:08.421374 293 log.go:181] (0xc0009f6be0) (3) Data frame handling\nI0816 23:26:08.422439 293 log.go:181] (0xc000140370) Data frame received for 1\nI0816 23:26:08.422448 293 log.go:181] (0xc000a06aa0) (1) Data frame handling\nI0816 23:26:08.422462 293 log.go:181] (0xc000a06aa0) (1) Data frame sent\nI0816 23:26:08.422624 293 log.go:181] (0xc000140370) (0xc000a06aa0) Stream removed, broadcasting: 1\nI0816 23:26:08.422643 293 log.go:181] (0xc000140370) Go away received\nI0816 23:26:08.422947 293 log.go:181] (0xc000140370) (0xc000a06aa0) Stream removed, broadcasting: 1\nI0816 23:26:08.422964 293 log.go:181] (0xc000140370) (0xc0009f6be0) Stream removed, broadcasting: 3\nI0816 23:26:08.422970 293 log.go:181] (0xc000140370) (0xc000a534a0) Stream removed, broadcasting: 5\n" Aug 16 23:26:08.432: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:26:08.432: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:26:08.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:26:09.034: INFO: stderr: "I0816 23:26:08.960694 312 log.go:181] (0xc0008c2000) (0xc000448280) Create stream\nI0816 23:26:08.960869 312 log.go:181] (0xc0008c2000) (0xc000448280) Stream added, broadcasting: 1\nI0816 23:26:08.962744 312 log.go:181] (0xc0008c2000) Reply frame received for 1\nI0816 23:26:08.962774 312 log.go:181] (0xc0008c2000) (0xc000928960) Create stream\nI0816 23:26:08.962782 312 log.go:181] (0xc0008c2000) (0xc000928960) Stream added, broadcasting: 3\nI0816 23:26:08.963584 312 log.go:181] (0xc0008c2000) Reply frame received for 3\nI0816 23:26:08.963625 312 log.go:181] (0xc0008c2000) (0xc000928be0) Create stream\nI0816 23:26:08.963651 312 log.go:181] 
(0xc0008c2000) (0xc000928be0) Stream added, broadcasting: 5\nI0816 23:26:08.964607 312 log.go:181] (0xc0008c2000) Reply frame received for 5\nI0816 23:26:09.025729 312 log.go:181] (0xc0008c2000) Data frame received for 3\nI0816 23:26:09.025774 312 log.go:181] (0xc000928960) (3) Data frame handling\nI0816 23:26:09.025792 312 log.go:181] (0xc000928960) (3) Data frame sent\nI0816 23:26:09.025943 312 log.go:181] (0xc0008c2000) Data frame received for 5\nI0816 23:26:09.025971 312 log.go:181] (0xc000928be0) (5) Data frame handling\nI0816 23:26:09.025984 312 log.go:181] (0xc000928be0) (5) Data frame sent\nI0816 23:26:09.025994 312 log.go:181] (0xc0008c2000) Data frame received for 5\nI0816 23:26:09.026003 312 log.go:181] (0xc000928be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0816 23:26:09.026130 312 log.go:181] (0xc0008c2000) Data frame received for 3\nI0816 23:26:09.026155 312 log.go:181] (0xc000928960) (3) Data frame handling\nI0816 23:26:09.028276 312 log.go:181] (0xc0008c2000) Data frame received for 1\nI0816 23:26:09.028294 312 log.go:181] (0xc000448280) (1) Data frame handling\nI0816 23:26:09.028303 312 log.go:181] (0xc000448280) (1) Data frame sent\nI0816 23:26:09.028312 312 log.go:181] (0xc0008c2000) (0xc000448280) Stream removed, broadcasting: 1\nI0816 23:26:09.028322 312 log.go:181] (0xc0008c2000) Go away received\nI0816 23:26:09.028823 312 log.go:181] (0xc0008c2000) (0xc000448280) Stream removed, broadcasting: 1\nI0816 23:26:09.028857 312 log.go:181] (0xc0008c2000) (0xc000928960) Stream removed, broadcasting: 3\nI0816 23:26:09.028873 312 log.go:181] (0xc0008c2000) (0xc000928be0) Stream removed, broadcasting: 5\n" Aug 16 23:26:09.034: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:26:09.034: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:26:09.186: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 23:26:09.186: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 23:26:09.186: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 16 23:26:09.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:26:09.926: INFO: stderr: "I0816 23:26:09.835108 330 log.go:181] (0xc0005a5080) (0xc0009a23c0) Create stream\nI0816 23:26:09.835167 330 log.go:181] (0xc0005a5080) (0xc0009a23c0) Stream added, broadcasting: 1\nI0816 23:26:09.840323 330 log.go:181] (0xc0005a5080) Reply frame received for 1\nI0816 23:26:09.840361 330 log.go:181] (0xc0005a5080) (0xc000b9b180) Create stream\nI0816 23:26:09.840371 330 log.go:181] (0xc0005a5080) (0xc000b9b180) Stream added, broadcasting: 3\nI0816 23:26:09.841406 330 log.go:181] (0xc0005a5080) Reply frame received for 3\nI0816 23:26:09.841444 330 log.go:181] (0xc0005a5080) (0xc000b94460) Create stream\nI0816 23:26:09.841453 330 log.go:181] (0xc0005a5080) (0xc000b94460) Stream added, broadcasting: 5\nI0816 23:26:09.842211 330 log.go:181] (0xc0005a5080) Reply frame received for 5\nI0816 23:26:09.913829 330 log.go:181] (0xc0005a5080) Data frame received for 
3\nI0816 23:26:09.913976 330 log.go:181] (0xc000b9b180) (3) Data frame handling\nI0816 23:26:09.914066 330 log.go:181] (0xc000b9b180) (3) Data frame sent\nI0816 23:26:09.915406 330 log.go:181] (0xc0005a5080) Data frame received for 5\nI0816 23:26:09.915429 330 log.go:181] (0xc000b94460) (5) Data frame handling\nI0816 23:26:09.915451 330 log.go:181] (0xc000b94460) (5) Data frame sent\nI0816 23:26:09.915461 330 log.go:181] (0xc0005a5080) Data frame received for 5\nI0816 23:26:09.915469 330 log.go:181] (0xc000b94460) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:26:09.916277 330 log.go:181] (0xc0005a5080) Data frame received for 3\nI0816 23:26:09.916294 330 log.go:181] (0xc000b9b180) (3) Data frame handling\nI0816 23:26:09.918763 330 log.go:181] (0xc0005a5080) Data frame received for 1\nI0816 23:26:09.918839 330 log.go:181] (0xc0009a23c0) (1) Data frame handling\nI0816 23:26:09.918868 330 log.go:181] (0xc0009a23c0) (1) Data frame sent\nI0816 23:26:09.918911 330 log.go:181] (0xc0005a5080) (0xc0009a23c0) Stream removed, broadcasting: 1\nI0816 23:26:09.918952 330 log.go:181] (0xc0005a5080) Go away received\nI0816 23:26:09.919447 330 log.go:181] (0xc0005a5080) (0xc0009a23c0) Stream removed, broadcasting: 1\nI0816 23:26:09.919477 330 log.go:181] (0xc0005a5080) (0xc000b9b180) Stream removed, broadcasting: 3\nI0816 23:26:09.919494 330 log.go:181] (0xc0005a5080) (0xc000b94460) Stream removed, broadcasting: 5\n" Aug 16 23:26:09.926: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:26:09.926: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:26:09.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:26:10.723: INFO: stderr: "I0816 23:26:10.095512 348 log.go:181] (0xc0006ae370) (0xc000209720) Create stream\nI0816 23:26:10.095571 348 log.go:181] (0xc0006ae370) (0xc000209720) Stream added, broadcasting: 1\nI0816 23:26:10.097604 348 log.go:181] (0xc0006ae370) Reply frame received for 1\nI0816 23:26:10.097654 348 log.go:181] (0xc0006ae370) (0xc0008014a0) Create stream\nI0816 23:26:10.097669 348 log.go:181] (0xc0006ae370) (0xc0008014a0) Stream added, broadcasting: 3\nI0816 23:26:10.098683 348 log.go:181] (0xc0006ae370) Reply frame received for 3\nI0816 23:26:10.098713 348 log.go:181] (0xc0006ae370) (0xc000801900) Create stream\nI0816 23:26:10.098721 348 log.go:181] (0xc0006ae370) (0xc000801900) Stream added, broadcasting: 5\nI0816 23:26:10.099563 348 log.go:181] (0xc0006ae370) Reply frame received for 5\nI0816 23:26:10.162635 348 log.go:181] (0xc0006ae370) Data frame received for 5\nI0816 23:26:10.162655 348 log.go:181] (0xc000801900) (5) Data frame handling\nI0816 23:26:10.162665 348 log.go:181] (0xc000801900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:26:10.711019 348 log.go:181] (0xc0006ae370) Data frame received for 3\nI0816 23:26:10.711052 348 log.go:181] (0xc0008014a0) (3) Data frame handling\nI0816 23:26:10.711062 348 log.go:181] (0xc0008014a0) (3) Data frame sent\nI0816 23:26:10.711069 348 log.go:181] (0xc0006ae370) Data frame received for 3\nI0816 23:26:10.711077 348 log.go:181] (0xc0008014a0) (3) Data frame handling\nI0816 23:26:10.711094 348 log.go:181] (0xc0006ae370) Data frame received for 5\nI0816 
23:26:10.711104 348 log.go:181] (0xc000801900) (5) Data frame handling\nI0816 23:26:10.713176 348 log.go:181] (0xc0006ae370) Data frame received for 1\nI0816 23:26:10.713274 348 log.go:181] (0xc000209720) (1) Data frame handling\nI0816 23:26:10.713343 348 log.go:181] (0xc000209720) (1) Data frame sent\nI0816 23:26:10.713364 348 log.go:181] (0xc0006ae370) (0xc000209720) Stream removed, broadcasting: 1\nI0816 23:26:10.713378 348 log.go:181] (0xc0006ae370) Go away received\nI0816 23:26:10.713771 348 log.go:181] (0xc0006ae370) (0xc000209720) Stream removed, broadcasting: 1\nI0816 23:26:10.713803 348 log.go:181] (0xc0006ae370) (0xc0008014a0) Stream removed, broadcasting: 3\nI0816 23:26:10.713813 348 log.go:181] (0xc0006ae370) (0xc000801900) Stream removed, broadcasting: 5\n" Aug 16 23:26:10.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:26:10.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:26:10.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2132 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:26:11.146: INFO: stderr: "I0816 23:26:10.964143 365 log.go:181] (0xc00003a0b0) (0xc00079fa40) Create stream\nI0816 23:26:10.964199 365 log.go:181] (0xc00003a0b0) (0xc00079fa40) Stream added, broadcasting: 1\nI0816 23:26:10.965951 365 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0816 23:26:10.965987 365 log.go:181] (0xc00003a0b0) (0xc0007421e0) Create stream\nI0816 23:26:10.965996 365 log.go:181] (0xc00003a0b0) (0xc0007421e0) Stream added, broadcasting: 3\nI0816 23:26:10.966649 365 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0816 23:26:10.966679 365 log.go:181] (0xc00003a0b0) (0xc0007426e0) Create stream\nI0816 23:26:10.966688 365 log.go:181] (0xc00003a0b0) (0xc0007426e0) Stream added, broadcasting: 5\nI0816 23:26:10.967262 365 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0816 23:26:11.028068 365 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0816 23:26:11.028108 365 log.go:181] (0xc0007426e0) (5) Data frame handling\nI0816 23:26:11.028131 365 log.go:181] (0xc0007426e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:26:11.130642 365 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0816 23:26:11.130671 365 log.go:181] (0xc0007421e0) (3) Data frame handling\nI0816 23:26:11.130679 365 log.go:181] (0xc0007421e0) (3) Data frame sent\nI0816 23:26:11.131444 365 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0816 23:26:11.131473 365 log.go:181] (0xc0007426e0) (5) Data frame handling\nI0816 23:26:11.131510 365 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0816 23:26:11.131523 365 log.go:181] (0xc0007421e0) (3) Data frame handling\nI0816 23:26:11.137172 365 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0816 23:26:11.137259 365 log.go:181] (0xc00079fa40) (1) Data frame handling\nI0816 23:26:11.137302 365 log.go:181] (0xc00079fa40) (1) Data frame sent\nI0816 23:26:11.137353 365 log.go:181] (0xc00003a0b0) (0xc00079fa40) Stream removed, broadcasting: 1\nI0816 23:26:11.137403 365 log.go:181] (0xc00003a0b0) Go away received\nI0816 23:26:11.137780 365 log.go:181] (0xc00003a0b0) (0xc00079fa40) Stream removed, broadcasting: 1\nI0816 23:26:11.137837 365 log.go:181] (0xc00003a0b0) (0xc0007421e0) Stream removed, broadcasting: 3\nI0816 
23:26:11.137854 365 log.go:181] (0xc00003a0b0) (0xc0007426e0) Stream removed, broadcasting: 5\n" Aug 16 23:26:11.146: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:26:11.146: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:26:11.146: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:26:11.160: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 16 23:26:21.250: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:26:21.250: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:26:21.250: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:26:21.324: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:21.324: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:21.324: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:21.324: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:21.324: INFO: Aug 16 23:26:21.324: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 23:26:22.632: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:22.632: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:22.632: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:22.632: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:22.632: INFO: Aug 16 23:26:22.632: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 23:26:23.998: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:23.998: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:23.998: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:23.998: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:23.998: INFO: Aug 16 23:26:23.998: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 23:26:25.113: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:25.113: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:25.113: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:25.113: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:25.113: INFO: Aug 16 23:26:25.113: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 23:26:26.186: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:26.186: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:26.186: INFO: ss-1 latest-worker2 Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:26.186: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:26.186: INFO: Aug 16 23:26:26.186: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 23:26:27.188: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:27.188: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:27.188: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:27.188: INFO: Aug 16 23:26:27.188: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 16 23:26:28.285: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:28.285: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:28.285: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:28.285: INFO: Aug 16 23:26:28.285: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 16 23:26:29.288: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:29.288: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:29.289: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:29.289: INFO: Aug 16 23:26:29.289: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 16 23:26:30.589: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 23:26:30.590: INFO: ss-0 latest-worker Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:22 +0000 UTC }] Aug 16 23:26:30.590: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:26:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 23:25:54 +0000 UTC }] Aug 16 23:26:30.590: INFO: Aug 16 23:26:30.590: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2132 Aug 16 23:26:31.668: INFO: Scaling statefulset ss to 0 Aug 16 23:26:31.678: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 16 23:26:31.679: INFO: Deleting all statefulset in ns statefulset-2132 Aug 16 23:26:31.681: INFO: Scaling statefulset ss to 0 Aug 16 23:26:31.687: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:26:31.689: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:26:31.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2132" for this suite. • [SLOW TEST:70.028 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":294,"completed":16,"skipped":271,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:26:31.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 16 23:26:32.934: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 16 23:26:38.093: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:26:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7335" for this suite. 
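
The ReplicationController test above hinges on label-selector matching: when a pod's labels stop matching the RC's selector, the controller "releases" the pod (drops its ownerReference) and spins up a replacement to keep the replica count. A minimal sketch of reproducing this by hand; the selector name=pod-release mirrors the test's basename but is an assumption, and <pod-name> is a placeholder:

$ kubectl get pods -l name=pod-release                                   # the one RC-managed pod
$ kubectl label pod <pod-name> name=released --overwrite                 # stop matching the RC selector
$ kubectl get pods -l name=pod-release                                   # RC has created a replacement
$ kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'   # released pod is now orphaned
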
• [SLOW TEST:6.794 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":294,"completed":17,"skipped":287,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:26:38.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 16 23:26:38.975: INFO: Waiting up to 5m0s for pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc" in namespace "emptydir-4981" to be "Succeeded or Failed" Aug 16 23:26:38.978: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972088ms Aug 16 23:26:42.016: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040797139s Aug 16 23:26:44.087: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.11259417s Aug 16 23:26:46.308: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.333057746s Aug 16 23:26:49.093: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118032033s STEP: Saw pod success Aug 16 23:26:49.093: INFO: Pod "pod-c4731149-1b49-4e5b-bb75-e47a181aafbc" satisfied condition "Succeeded or Failed" Aug 16 23:26:49.095: INFO: Trying to get logs from node latest-worker2 pod pod-c4731149-1b49-4e5b-bb75-e47a181aafbc container test-container: STEP: delete the pod Aug 16 23:26:49.738: INFO: Waiting for pod pod-c4731149-1b49-4e5b-bb75-e47a181aafbc to disappear Aug 16 23:26:50.117: INFO: Pod pod-c4731149-1b49-4e5b-bb75-e47a181aafbc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:26:50.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4981" for this suite. 
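
The emptydir test above creates a pod whose container inspects an emptyDir volume on the node's default medium and verifies it is mounted root-owned and world-writable (0777) before the pod reaches Succeeded. A hand-rolled approximation; the pod name, image, and mount path are illustrative, not the test's exact fixture:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # the e2e suite uses its own test image
    command: ["sh", "-c", "ls -ld /ephemeral && touch /ephemeral/f && ls -l /ephemeral/f"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium (node disk); the test checks the 0777 mode
EOF
$ kubectl logs emptydir-demo       # inspect the reported mode/ownership once the pod Succeeds
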
• [SLOW TEST:11.790 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":18,"skipped":292,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:26:50.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 16 23:26:51.313: INFO: Waiting up to 5m0s for pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c" in namespace "containers-2639" to be "Succeeded or Failed" Aug 16 23:26:51.482: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c": Phase="Pending", Reason="", readiness=false. Elapsed: 169.491891ms Aug 16 23:26:53.776: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462812553s Aug 16 23:26:55.812: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498871642s Aug 16 23:26:58.350: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.037575382s Aug 16 23:27:00.519: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.206649354s STEP: Saw pod success Aug 16 23:27:00.520: INFO: Pod "client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c" satisfied condition "Succeeded or Failed" Aug 16 23:27:00.564: INFO: Trying to get logs from node latest-worker2 pod client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c container test-container: STEP: delete the pod Aug 16 23:27:01.488: INFO: Waiting for pod client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c to disappear Aug 16 23:27:02.053: INFO: Pod client-containers-61f9f5a5-30c3-4873-b2f0-8e1d5f9c997c no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:27:02.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2639" for this suite. 
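
The Docker Containers test above checks that args in a pod spec replaces the image's default arguments (the Docker CMD) while leaving the entrypoint alone. A minimal sketch under illustrative names; busybox has no ENTRYPOINT, so args alone becomes the executed command:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden", "args"]   # replaces the image CMD, which is what this test exercises
EOF
$ kubectl logs args-override-demo  # prints: overridden args
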
• [SLOW TEST:12.527 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":294,"completed":19,"skipped":307,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:27:02.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 16 23:27:04.814: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 16 23:27:07.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217225, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:27:09.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, 
loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217225, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:27:11.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217225, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:27:13.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217225, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217224, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 23:27:16.068: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:27:16.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:27:25.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8375" for this suite. 
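
The conversion-webhook test works by installing a CRD that serves two versions and points spec.conversion at the webhook service deployed above, so listing the same objects as v1 and as v2 forces round-trip conversion of a mixed-version list. A heavily abbreviated sketch of such a CRD; the group, kind, path, and port are assumptions (the service name and namespace appear in the log), and the test additionally injects its generated CA bundle into clientConfig:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-tests.example.com      # hypothetical plural.group
spec:
  group: example.com
  names: {kind: E2eTest, plural: e2e-tests, singular: e2e-test}
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: e2e-test-crd-conversion-webhook
          namespace: crd-webhook-8375
          path: /crdconvert        # assumed path; the test also sets caBundle here
          port: 9443               # assumed port
EOF
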
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:22.941 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":294,"completed":20,"skipped":319,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:27:25.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:27:34.559: INFO: Waiting up to 5m0s for pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71" in namespace "pods-2530" to be "Succeeded or Failed" Aug 16 23:27:34.619: INFO: Pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 60.223019ms Aug 16 23:27:36.687: INFO: Pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12740805s Aug 16 23:27:38.988: INFO: Pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429300717s Aug 16 23:27:41.287: INFO: Pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.727837752s STEP: Saw pod success Aug 16 23:27:41.287: INFO: Pod "client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71" satisfied condition "Succeeded or Failed" Aug 16 23:27:41.740: INFO: Trying to get logs from node latest-worker2 pod client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71 container env3cont: STEP: delete the pod Aug 16 23:27:42.622: INFO: Waiting for pod client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71 to disappear Aug 16 23:27:42.667: INFO: Pod client-envvars-fb0715d1-56c0-432b-94b3-3c4318c3fd71 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:27:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2530" for this suite. • [SLOW TEST:16.869 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":294,"completed":21,"skipped":331,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:27:42.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 16 23:27:42.969: INFO: Waiting up to 1m0s for all nodes to be ready Aug 16 23:28:43.570: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 16 23:28:43.922: INFO: Created pod: pod0-sched-preemption-low-priority Aug 16 23:28:43.978: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:29:18.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-436" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:96.953 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":294,"completed":22,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:29:19.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:29:20.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9598" for this suite. 
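For reference, the lifecycle the PodTemplates test above drives is plain CRUD against the podtemplates resource. A minimal sketch of the same sequence with kubectl, assuming a reachable cluster; the name demo-template and the default namespace are illustrative, not the generated ones the suite uses:

# Create a PodTemplate, read it back, patch a label, then delete it.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-template                 # illustrative name
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: demo
      image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl get podtemplate demo-template -o yaml
kubectl patch podtemplate demo-template --type=merge -p '{"metadata":{"labels":{"edited":"true"}}}'
kubectl delete podtemplate demo-template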
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":294,"completed":23,"skipped":353,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:29:20.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:29:20.221: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:29:22.225: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:29:24.400: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:29:26.442: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:29:28.269: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:30.228: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:32.225: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:34.226: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:36.225: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:38.366: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:40.226: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:42.225: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = false) Aug 16 23:29:44.225: INFO: The status of Pod test-webserver-239fdbcf-4a6c-4d64-84f6-d2365ecd4f54 is Running (Ready = true) Aug 16 23:29:44.228: INFO: Container started at 2020-08-16 23:29:26 +0000 UTC, pod became ready at 2020-08-16 23:29:42 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:29:44.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-7657" for this suite. • [SLOW TEST:24.175 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":294,"completed":24,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:29:44.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl replace /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 16 23:29:44.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6648' Aug 16 23:29:44.566: INFO: stderr: "" Aug 16 23:29:44.566: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 16 23:29:54.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6648 -o json' Aug 16 23:29:54.720: INFO: stderr: "" Aug 16 23:29:54.721: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-16T23:29:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n 
\"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-16T23:29:44Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.170\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-16T23:29:52Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6648\",\n \"resourceVersion\": \"528706\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6648/pods/e2e-test-httpd-pod\",\n \"uid\": \"dd416cd8-923d-47ea-9e3b-4c13a2149c9d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-pprpp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-pprpp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-pprpp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-16T23:29:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-16T23:29:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-16T23:29:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-16T23:29:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3d52269d156fa93eb465cbe329a3a1977ce6bf91e3ba62ebbf4840f8569b9553\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n 
\"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-16T23:29:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.170\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.170\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-16T23:29:44Z\"\n }\n}\n" STEP: replace the image in the pod Aug 16 23:29:54.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6648' Aug 16 23:29:55.524: INFO: stderr: "" Aug 16 23:29:55.524: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 Aug 16 23:29:55.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6648' Aug 16 23:30:09.993: INFO: stderr: "" Aug 16 23:30:09.993: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:30:09.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6648" for this suite. • [SLOW TEST:26.626 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1572 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":294,"completed":25,"skipped":418,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:30:10.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
secret-emptykey-test-0b019f3b-173e-4c37-a820-4d94e4801139 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:30:11.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3539" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":294,"completed":26,"skipped":430,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:30:12.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:30:12.737: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:30:15.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5136" for this suite. 
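The defaulting verified above comes from the CRD's structural schema: a default declared in openAPIV3Schema is applied both on create and when objects are read back from storage. A minimal sketch, assuming cluster-admin credentials; the widgets.example.com group, kind, and field are illustrative:

# A CRD whose schema declares a default for spec.size.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
                default: 3            # filled in on create and on read from storage
EOF
kubectl wait --for condition=established --timeout=60s crd/widgets.example.com
# Create an object without the field; the default comes back.
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: demo
spec: {}
EOF
kubectl get widget demo -o jsonpath='{.spec.size}'    # prints 3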
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":294,"completed":27,"skipped":439,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:30:15.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:30:16.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d" in namespace "projected-6699" to be "Succeeded or Failed" Aug 16 23:30:16.410: INFO: Pod "downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 163.423038ms Aug 16 23:30:19.007: INFO: Pod "downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.760846899s Aug 16 23:30:21.053: INFO: Pod "downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.807011016s STEP: Saw pod success Aug 16 23:30:21.053: INFO: Pod "downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d" satisfied condition "Succeeded or Failed" Aug 16 23:30:21.055: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d container client-container: STEP: delete the pod Aug 16 23:30:21.769: INFO: Waiting for pod downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d to disappear Aug 16 23:30:21.771: INFO: Pod downwardapi-volume-fdec9098-dcfa-4b42-9083-fdc9390c6f1d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:30:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6699" for this suite. 
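What the projected downwardAPI test checks is that a container's memory limit can be surfaced to it as a file through a projected volume's downwardAPI source. A minimal sketch, assuming a reachable cluster; pod, volume, and path names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: memlimit-demo                 # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs memlimit-demo            # once complete: 67108864 (the limit in bytes)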
• [SLOW TEST:6.237 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":28,"skipped":446,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:30:21.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 16 23:30:22.080: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:30:37.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-804" for this suite. 
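The init-container behavior asserted above follows from restartPolicy Never: a failing init container is not retried, the pod goes to phase Failed, and the app containers never start. A minimal sketch, assuming a reachable cluster; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo                # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]   # init container always fails
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo never reached"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # settles at: Failed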
• [SLOW TEST:16.261 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":294,"completed":29,"skipped":458,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:30:38.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 16 23:30:39.006: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 16 23:30:51.450: INFO: >>> kubeConfig: /root/.kube/config Aug 16 23:30:54.555: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:31:07.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6165" for this suite. 
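The OpenAPI publication being tested can be observed directly: every served CRD version contributes a schema definition to the aggregated /openapi/v2 document, keyed by the reversed group plus version and kind. A sketch, assuming the illustrative widgets.example.com CRD from the earlier example is installed with versions v1 and v2 served:

# Both served versions show up as separate schema definitions.
kubectl get --raw /openapi/v2 | grep -o 'com\.example\.v[0-9]*\.Widget' | sort -u
# com.example.v1.Widget
# com.example.v2.Widget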
• [SLOW TEST:29.635 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":294,"completed":30,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:31:07.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-b6c577a4-9d89-433f-b4ca-0001563fe0c6 STEP: Creating a pod to test consume secrets Aug 16 23:31:09.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5" in namespace "projected-3614" to be "Succeeded or Failed" Aug 16 23:31:09.654: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Pending", Reason="", readiness=false. Elapsed: 216.998295ms Aug 16 23:31:11.658: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220583292s Aug 16 23:31:13.678: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240464739s Aug 16 23:31:15.901: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463236277s Aug 16 23:31:18.348: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Running", Reason="", readiness=true. Elapsed: 8.910564792s Aug 16 23:31:21.553: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.115727252s STEP: Saw pod success Aug 16 23:31:21.553: INFO: Pod "pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5" satisfied condition "Succeeded or Failed" Aug 16 23:31:21.560: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5 container projected-secret-volume-test: STEP: delete the pod Aug 16 23:31:23.726: INFO: Waiting for pod pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5 to disappear Aug 16 23:31:24.073: INFO: Pod pod-projected-secrets-09606b74-f239-4343-9678-0289c4fa97e5 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:31:24.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3614" for this suite. • [SLOW TEST:16.807 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":31,"skipped":510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:31:24.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 23:31:26.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 23:31:28.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217486, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:31:30.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217486, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:31:32.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217487, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217486, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 23:31:36.008: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:31:36.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3595-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:31:38.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3956" for this suite. STEP: Destroying namespace "webhook-3956-markers" for this suite. 
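The registration step logged above ("Registering the mutating webhook ... via the AdmissionRegistration API") has roughly the following shape. A sketch only, assuming a webhook service already deployed and served over TLS (CA wiring omitted); all names are illustrative, and a registration like this blocks matching creates until the backend answers, so it should be deleted after experimenting:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator              # illustrative
webhooks:
- name: mutate-widgets.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: default              # illustrative service fronting the webhook pod
      name: demo-webhook
      path: /mutating-custom-resource
    # caBundle for the serving cert goes here in a real setup
  rules:
  - apiGroups: ["example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["widgets"]
EOF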
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.760 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":294,"completed":32,"skipped":587,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:31:41.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:31:50.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-379" for this suite. STEP: Destroying namespace "nsdeletetest-2609" for this suite. Aug 16 23:31:50.545: INFO: Namespace nsdeletetest-2609 was already deleted STEP: Destroying namespace "nsdeletetest-7580" for this suite. 
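Namespace deletion cascades to every namespaced object, which is exactly what the check above relies on. A minimal sketch, assuming a reachable cluster; names are illustrative:

kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo --wait
kubectl get services -n nsdelete-demo      # nothing left; the namespace is gone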
• [SLOW TEST:9.245 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":294,"completed":33,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:31:50.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8631 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8631 I0816 23:31:50.749940 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8631, replica count: 2 I0816 23:31:53.800347 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:31:56.800528 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 23:31:56.800: INFO: Creating new exec pod Aug 16 23:32:03.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpoddrknk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 16 23:32:04.571: INFO: stderr: "I0816 23:32:04.499996 458 log.go:181] (0xc00003b080) (0xc000814b40) Create stream\nI0816 23:32:04.500074 458 log.go:181] (0xc00003b080) (0xc000814b40) Stream added, broadcasting: 1\nI0816 23:32:04.501750 458 log.go:181] (0xc00003b080) Reply frame received for 1\nI0816 23:32:04.501780 458 log.go:181] (0xc00003b080) (0xc0007b60a0) Create stream\nI0816 23:32:04.501790 458 log.go:181] (0xc00003b080) (0xc0007b60a0) Stream added, broadcasting: 3\nI0816 23:32:04.502546 458 log.go:181] (0xc00003b080) Reply frame received for 3\nI0816 23:32:04.502569 458 log.go:181] (0xc00003b080) (0xc0005d40a0) Create stream\nI0816 23:32:04.502577 458 log.go:181] 
(0xc00003b080) (0xc0005d40a0) Stream added, broadcasting: 5\nI0816 23:32:04.503376 458 log.go:181] (0xc00003b080) Reply frame received for 5\nI0816 23:32:04.563452 458 log.go:181] (0xc00003b080) Data frame received for 5\nI0816 23:32:04.563494 458 log.go:181] (0xc0005d40a0) (5) Data frame handling\nI0816 23:32:04.563517 458 log.go:181] (0xc0005d40a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0816 23:32:04.563985 458 log.go:181] (0xc00003b080) Data frame received for 5\nI0816 23:32:04.564002 458 log.go:181] (0xc0005d40a0) (5) Data frame handling\nI0816 23:32:04.564025 458 log.go:181] (0xc0005d40a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0816 23:32:04.564297 458 log.go:181] (0xc00003b080) Data frame received for 3\nI0816 23:32:04.564313 458 log.go:181] (0xc0007b60a0) (3) Data frame handling\nI0816 23:32:04.565116 458 log.go:181] (0xc00003b080) Data frame received for 5\nI0816 23:32:04.565137 458 log.go:181] (0xc0005d40a0) (5) Data frame handling\nI0816 23:32:04.566720 458 log.go:181] (0xc00003b080) Data frame received for 1\nI0816 23:32:04.566773 458 log.go:181] (0xc000814b40) (1) Data frame handling\nI0816 23:32:04.566808 458 log.go:181] (0xc000814b40) (1) Data frame sent\nI0816 23:32:04.566873 458 log.go:181] (0xc00003b080) (0xc000814b40) Stream removed, broadcasting: 1\nI0816 23:32:04.567144 458 log.go:181] (0xc00003b080) (0xc000814b40) Stream removed, broadcasting: 1\nI0816 23:32:04.567184 458 log.go:181] (0xc00003b080) (0xc0007b60a0) Stream removed, broadcasting: 3\nI0816 23:32:04.567303 458 log.go:181] (0xc00003b080) (0xc0005d40a0) Stream removed, broadcasting: 5\n" Aug 16 23:32:04.572: INFO: stdout: "" Aug 16 23:32:04.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpoddrknk -- /bin/sh -x -c nc -zv -t -w 2 10.109.159.39 80' Aug 16 23:32:04.801: INFO: stderr: "I0816 23:32:04.743425 476 log.go:181] (0xc000d07760) (0xc000911360) Create stream\nI0816 23:32:04.743504 476 log.go:181] (0xc000d07760) (0xc000911360) Stream added, broadcasting: 1\nI0816 23:32:04.745821 476 log.go:181] (0xc000d07760) Reply frame received for 1\nI0816 23:32:04.745881 476 log.go:181] (0xc000d07760) (0xc000838280) Create stream\nI0816 23:32:04.745914 476 log.go:181] (0xc000d07760) (0xc000838280) Stream added, broadcasting: 3\nI0816 23:32:04.746756 476 log.go:181] (0xc000d07760) Reply frame received for 3\nI0816 23:32:04.746807 476 log.go:181] (0xc000d07760) (0xc000611c20) Create stream\nI0816 23:32:04.746830 476 log.go:181] (0xc000d07760) (0xc000611c20) Stream added, broadcasting: 5\nI0816 23:32:04.747478 476 log.go:181] (0xc000d07760) Reply frame received for 5\nI0816 23:32:04.794828 476 log.go:181] (0xc000d07760) Data frame received for 5\nI0816 23:32:04.794860 476 log.go:181] (0xc000611c20) (5) Data frame handling\nI0816 23:32:04.794878 476 log.go:181] (0xc000611c20) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.159.39 80\nConnection to 10.109.159.39 80 port [tcp/http] succeeded!\nI0816 23:32:04.794904 476 log.go:181] (0xc000d07760) Data frame received for 3\nI0816 23:32:04.794934 476 log.go:181] (0xc000838280) (3) Data frame handling\nI0816 23:32:04.794951 476 log.go:181] (0xc000d07760) Data frame received for 5\nI0816 23:32:04.794964 476 log.go:181] (0xc000611c20) (5) Data frame handling\nI0816 23:32:04.795800 476 log.go:181] (0xc000d07760) Data frame received for 1\nI0816 23:32:04.795817 476 log.go:181] (0xc000911360) (1) Data frame 
handling\nI0816 23:32:04.795838 476 log.go:181] (0xc000911360) (1) Data frame sent\nI0816 23:32:04.795857 476 log.go:181] (0xc000d07760) (0xc000911360) Stream removed, broadcasting: 1\nI0816 23:32:04.795997 476 log.go:181] (0xc000d07760) Go away received\nI0816 23:32:04.796190 476 log.go:181] (0xc000d07760) (0xc000911360) Stream removed, broadcasting: 1\nI0816 23:32:04.796213 476 log.go:181] (0xc000d07760) (0xc000838280) Stream removed, broadcasting: 3\nI0816 23:32:04.796228 476 log.go:181] (0xc000d07760) (0xc000611c20) Stream removed, broadcasting: 5\n" Aug 16 23:32:04.801: INFO: stdout: "" Aug 16 23:32:04.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpoddrknk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31598' Aug 16 23:32:05.992: INFO: stderr: "I0816 23:32:05.931203 492 log.go:181] (0xc000c95290) (0xc0007ed860) Create stream\nI0816 23:32:05.931244 492 log.go:181] (0xc000c95290) (0xc0007ed860) Stream added, broadcasting: 1\nI0816 23:32:05.932630 492 log.go:181] (0xc000c95290) Reply frame received for 1\nI0816 23:32:05.932654 492 log.go:181] (0xc000c95290) (0xc0007fa320) Create stream\nI0816 23:32:05.932660 492 log.go:181] (0xc000c95290) (0xc0007fa320) Stream added, broadcasting: 3\nI0816 23:32:05.933197 492 log.go:181] (0xc000c95290) Reply frame received for 3\nI0816 23:32:05.933218 492 log.go:181] (0xc000c95290) (0xc0007fac80) Create stream\nI0816 23:32:05.933226 492 log.go:181] (0xc000c95290) (0xc0007fac80) Stream added, broadcasting: 5\nI0816 23:32:05.933679 492 log.go:181] (0xc000c95290) Reply frame received for 5\nI0816 23:32:05.986891 492 log.go:181] (0xc000c95290) Data frame received for 3\nI0816 23:32:05.986928 492 log.go:181] (0xc0007fa320) (3) Data frame handling\nI0816 23:32:05.986958 492 log.go:181] (0xc000c95290) Data frame received for 5\nI0816 23:32:05.986979 492 log.go:181] (0xc0007fac80) (5) Data frame handling\nI0816 23:32:05.986995 492 log.go:181] (0xc0007fac80) (5) Data frame sent\nI0816 23:32:05.987002 492 log.go:181] (0xc000c95290) Data frame received for 5\nI0816 23:32:05.987008 492 log.go:181] (0xc0007fac80) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31598\nConnection to 172.18.0.11 31598 port [tcp/31598] succeeded!\nI0816 23:32:05.987829 492 log.go:181] (0xc000c95290) Data frame received for 1\nI0816 23:32:05.987847 492 log.go:181] (0xc0007ed860) (1) Data frame handling\nI0816 23:32:05.987901 492 log.go:181] (0xc0007ed860) (1) Data frame sent\nI0816 23:32:05.987920 492 log.go:181] (0xc000c95290) (0xc0007ed860) Stream removed, broadcasting: 1\nI0816 23:32:05.988020 492 log.go:181] (0xc000c95290) Go away received\nI0816 23:32:05.988227 492 log.go:181] (0xc000c95290) (0xc0007ed860) Stream removed, broadcasting: 1\nI0816 23:32:05.988242 492 log.go:181] (0xc000c95290) (0xc0007fa320) Stream removed, broadcasting: 3\nI0816 23:32:05.988251 492 log.go:181] (0xc000c95290) (0xc0007fac80) Stream removed, broadcasting: 5\n" Aug 16 23:32:05.992: INFO: stdout: "" Aug 16 23:32:05.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpoddrknk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31598' Aug 16 23:32:06.834: INFO: stderr: "I0816 23:32:06.773913 508 log.go:181] (0xc0006374a0) (0xc000c12b40) Create stream\nI0816 23:32:06.773949 508 log.go:181] (0xc0006374a0) (0xc000c12b40) Stream added, broadcasting: 1\nI0816 23:32:06.777465 508 log.go:181] (0xc0006374a0) Reply frame received 
for 1\nI0816 23:32:06.777535 508 log.go:181] (0xc0006374a0) (0xc00051c460) Create stream\nI0816 23:32:06.777549 508 log.go:181] (0xc0006374a0) (0xc00051c460) Stream added, broadcasting: 3\nI0816 23:32:06.778197 508 log.go:181] (0xc0006374a0) Reply frame received for 3\nI0816 23:32:06.778217 508 log.go:181] (0xc0006374a0) (0xc00051ca00) Create stream\nI0816 23:32:06.778223 508 log.go:181] (0xc0006374a0) (0xc00051ca00) Stream added, broadcasting: 5\nI0816 23:32:06.778856 508 log.go:181] (0xc0006374a0) Reply frame received for 5\nI0816 23:32:06.829916 508 log.go:181] (0xc0006374a0) Data frame received for 5\nI0816 23:32:06.829940 508 log.go:181] (0xc00051ca00) (5) Data frame handling\nI0816 23:32:06.829955 508 log.go:181] (0xc00051ca00) (5) Data frame sent\nI0816 23:32:06.829962 508 log.go:181] (0xc0006374a0) Data frame received for 5\nI0816 23:32:06.829969 508 log.go:181] (0xc00051ca00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31598\nConnection to 172.18.0.14 31598 port [tcp/31598] succeeded!\nI0816 23:32:06.830088 508 log.go:181] (0xc0006374a0) Data frame received for 3\nI0816 23:32:06.830119 508 log.go:181] (0xc00051c460) (3) Data frame handling\nI0816 23:32:06.831330 508 log.go:181] (0xc0006374a0) Data frame received for 1\nI0816 23:32:06.831340 508 log.go:181] (0xc000c12b40) (1) Data frame handling\nI0816 23:32:06.831348 508 log.go:181] (0xc000c12b40) (1) Data frame sent\nI0816 23:32:06.831360 508 log.go:181] (0xc0006374a0) (0xc000c12b40) Stream removed, broadcasting: 1\nI0816 23:32:06.831369 508 log.go:181] (0xc0006374a0) Go away received\nI0816 23:32:06.831637 508 log.go:181] (0xc0006374a0) (0xc000c12b40) Stream removed, broadcasting: 1\nI0816 23:32:06.831648 508 log.go:181] (0xc0006374a0) (0xc00051c460) Stream removed, broadcasting: 3\nI0816 23:32:06.831653 508 log.go:181] (0xc0006374a0) (0xc00051ca00) Stream removed, broadcasting: 5\n" Aug 16 23:32:06.835: INFO: stdout: "" Aug 16 23:32:06.835: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:32:07.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8631" for this suite. 
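The type flip performed above (ExternalName to NodePort) can be reproduced with a plain update; the API server allocates a ClusterIP and a NodePort when the type changes. A minimal sketch, assuming a reachable cluster; the service name and external name are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-demo             # illustrative
spec:
  type: ExternalName
  externalName: example.com
  ports:
  - port: 80
EOF
# Flip the type; externalName must be cleared at the same time.
kubectl patch service externalname-demo --type=merge -p '{"spec":{"type":"NodePort","externalName":null}}'
kubectl get service externalname-demo -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
# The test then probes that port from inside the cluster with:
#   nc -zv -t -w 2 <nodeIP> <nodePort>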
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:16.655 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":294,"completed":34,"skipped":626,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:32:07.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 16 23:32:08.403: INFO: Waiting up to 5m0s for pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede" in namespace "emptydir-1365" to be "Succeeded or Failed" Aug 16 23:32:08.482: INFO: Pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede": Phase="Pending", Reason="", readiness=false. Elapsed: 79.478543ms Aug 16 23:32:10.493: INFO: Pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09005683s Aug 16 23:32:12.499: INFO: Pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095971223s Aug 16 23:32:14.702: INFO: Pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.299509769s STEP: Saw pod success Aug 16 23:32:14.702: INFO: Pod "pod-3fa5b9fa-a436-4265-9a83-315c898efede" satisfied condition "Succeeded or Failed" Aug 16 23:32:14.719: INFO: Trying to get logs from node latest-worker2 pod pod-3fa5b9fa-a436-4265-9a83-315c898efede container test-container: STEP: delete the pod Aug 16 23:32:15.150: INFO: Waiting for pod pod-3fa5b9fa-a436-4265-9a83-315c898efede to disappear Aug 16 23:32:15.153: INFO: Pod pod-3fa5b9fa-a436-4265-9a83-315c898efede no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:32:15.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1365" for this suite. 
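The mode assertion above is about the mount point itself: an emptyDir with medium Memory is a tmpfs mount and defaults to mode 0777 (drwxrwxrwx). A minimal sketch, assuming a reachable cluster; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-demo               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /mnt/tmpfs; ls -ld /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                  # tmpfs-backed
EOF
kubectl logs tmpfs-mode-demo          # once complete: a tmpfs mount line and drwxrwxrwx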
• [SLOW TEST:8.133 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":35,"skipped":635,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:32:15.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-cccea4fb-cb15-492c-86a2-1722333fd124 STEP: Creating a pod to test consume secrets Aug 16 23:32:15.775: INFO: Waiting up to 5m0s for pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9" in namespace "secrets-7462" to be "Succeeded or Failed" Aug 16 23:32:16.187: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 411.92049ms Aug 16 23:32:19.135: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.360707664s Aug 16 23:32:21.283: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508340138s Aug 16 23:32:23.445: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9": Phase="Running", Reason="", readiness=true. Elapsed: 7.670753046s Aug 16 23:32:25.448: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.673684559s STEP: Saw pod success Aug 16 23:32:25.448: INFO: Pod "pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9" satisfied condition "Succeeded or Failed" Aug 16 23:32:25.450: INFO: Trying to get logs from node latest-worker pod pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9 container secret-env-test: STEP: delete the pod Aug 16 23:32:25.542: INFO: Waiting for pod pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9 to disappear Aug 16 23:32:25.553: INFO: Pod pod-secrets-457fb8a6-42ac-4ed5-b4ac-b3b0ee7f3fd9 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:32:25.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7462" for this suite. 
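The spec above materializes a secret key as a container environment variable and checks that the container observes the value. A minimal sketch of that wiring follows; the secret/pod names and key are placeholders, with busybox standing in for the suite's secret-env-test image.

// A sketch, not the test's source: consume a Secret key via an env var.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// StringData is written as plaintext; the API server stores it
	// base64-encoded under .data.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-demo-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}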
• [SLOW TEST:10.221 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":294,"completed":36,"skipped":640,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:32:25.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Aug 16 23:32:33.685: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9245 PodName:var-expansion-d140c4d7-4f2e-49e4-8f30-c8a1132e4122 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 23:32:33.685: INFO: >>> kubeConfig: /root/.kube/config I0816 23:32:33.716713 7 log.go:181] (0xc0034d2a50) (0xc0035f6320) Create stream I0816 23:32:33.716815 7 log.go:181] (0xc0034d2a50) (0xc0035f6320) Stream added, broadcasting: 1 I0816 23:32:33.718453 7 log.go:181] (0xc0034d2a50) Reply frame received for 1 I0816 23:32:33.718480 7 log.go:181] (0xc0034d2a50) (0xc00310a6e0) Create stream I0816 23:32:33.718490 7 log.go:181] (0xc0034d2a50) (0xc00310a6e0) Stream added, broadcasting: 3 I0816 23:32:33.719033 7 log.go:181] (0xc0034d2a50) Reply frame received for 3 I0816 23:32:33.719058 7 log.go:181] (0xc0034d2a50) (0xc001a83a40) Create stream I0816 23:32:33.719067 7 log.go:181] (0xc0034d2a50) (0xc001a83a40) Stream added, broadcasting: 5 I0816 23:32:33.719715 7 log.go:181] (0xc0034d2a50) Reply frame received for 5 I0816 23:32:33.771679 7 log.go:181] (0xc0034d2a50) Data frame received for 5 I0816 23:32:33.771739 7 log.go:181] (0xc001a83a40) (5) Data frame handling I0816 23:32:33.771774 7 log.go:181] (0xc0034d2a50) Data frame received for 3 I0816 23:32:33.771793 7 log.go:181] (0xc00310a6e0) (3) Data frame handling I0816 23:32:33.773263 7 log.go:181] (0xc0034d2a50) Data frame received for 1 I0816 23:32:33.773298 7 log.go:181] (0xc0035f6320) (1) Data frame handling I0816 23:32:33.773336 7 log.go:181] (0xc0035f6320) (1) Data frame sent I0816 23:32:33.773373 7 log.go:181] (0xc0034d2a50) (0xc0035f6320) Stream removed, broadcasting: 1 I0816 23:32:33.773414 7 log.go:181] (0xc0034d2a50) Go away received I0816 23:32:33.773533 7 log.go:181] (0xc0034d2a50) 
(0xc0035f6320) Stream removed, broadcasting: 1 I0816 23:32:33.773559 7 log.go:181] (0xc0034d2a50) (0xc00310a6e0) Stream removed, broadcasting: 3 I0816 23:32:33.773578 7 log.go:181] (0xc0034d2a50) (0xc001a83a40) Stream removed, broadcasting: 5 STEP: test for file in mounted path Aug 16 23:32:33.776: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9245 PodName:var-expansion-d140c4d7-4f2e-49e4-8f30-c8a1132e4122 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 23:32:33.776: INFO: >>> kubeConfig: /root/.kube/config I0816 23:32:33.811199 7 log.go:181] (0xc002fa8bb0) (0xc0036a60a0) Create stream I0816 23:32:33.811224 7 log.go:181] (0xc002fa8bb0) (0xc0036a60a0) Stream added, broadcasting: 1 I0816 23:32:33.813124 7 log.go:181] (0xc002fa8bb0) Reply frame received for 1 I0816 23:32:33.813164 7 log.go:181] (0xc002fa8bb0) (0xc0033870e0) Create stream I0816 23:32:33.813176 7 log.go:181] (0xc002fa8bb0) (0xc0033870e0) Stream added, broadcasting: 3 I0816 23:32:33.813831 7 log.go:181] (0xc002fa8bb0) Reply frame received for 3 I0816 23:32:33.813848 7 log.go:181] (0xc002fa8bb0) (0xc003387180) Create stream I0816 23:32:33.813855 7 log.go:181] (0xc002fa8bb0) (0xc003387180) Stream added, broadcasting: 5 I0816 23:32:33.814363 7 log.go:181] (0xc002fa8bb0) Reply frame received for 5 I0816 23:32:33.876799 7 log.go:181] (0xc002fa8bb0) Data frame received for 5 I0816 23:32:33.876827 7 log.go:181] (0xc003387180) (5) Data frame handling I0816 23:32:33.876841 7 log.go:181] (0xc002fa8bb0) Data frame received for 3 I0816 23:32:33.876849 7 log.go:181] (0xc0033870e0) (3) Data frame handling I0816 23:32:33.877495 7 log.go:181] (0xc002fa8bb0) Data frame received for 1 I0816 23:32:33.877517 7 log.go:181] (0xc0036a60a0) (1) Data frame handling I0816 23:32:33.877533 7 log.go:181] (0xc0036a60a0) (1) Data frame sent I0816 23:32:33.877559 7 log.go:181] (0xc002fa8bb0) (0xc0036a60a0) Stream removed, broadcasting: 1 I0816 23:32:33.877586 7 log.go:181] (0xc002fa8bb0) Go away received I0816 23:32:33.877673 7 log.go:181] (0xc002fa8bb0) (0xc0036a60a0) Stream removed, broadcasting: 1 I0816 23:32:33.877697 7 log.go:181] (0xc002fa8bb0) (0xc0033870e0) Stream removed, broadcasting: 3 I0816 23:32:33.877714 7 log.go:181] (0xc002fa8bb0) (0xc003387180) Stream removed, broadcasting: 5 STEP: updating the annotation value Aug 16 23:32:34.399: INFO: Successfully updated pod "var-expansion-d140c4d7-4f2e-49e4-8f30-c8a1132e4122" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Aug 16 23:32:34.429: INFO: Deleting pod "var-expansion-d140c4d7-4f2e-49e4-8f30-c8a1132e4122" in namespace "var-expansion-9245" Aug 16 23:32:34.432: INFO: Wait up to 5m0s for pod "var-expansion-d140c4d7-4f2e-49e4-8f30-c8a1132e4122" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:33:20.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9245" for this suite. 
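The two execs above verify that a file touched at /volume_mount/mypath/foo/test.log appears at /subpath_mount/test.log: both mounts share one volume, and the second mount's subPathExpr expands an annotation value ("mypath/foo") delivered through the downward API. A minimal sketch of a pod wired that way follows; the pod/volume names and the busybox image are assumptions, while the mount paths and the touch/test commands mirror the log.

// A sketch, not the test's source: subPathExpr expanded from an annotation.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "subpath-expr-demo",
			// The annotation value becomes the subpath via the downward API.
			Annotations: map[string]string{"mysubpath": "mypath/foo"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "vol",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// Same check the suite execs above: a file created under the
				// full mount shows up through the subpath mount.
				Command: []string{"sh", "-c",
					"touch /volume_mount/mypath/foo/test.log && test -f /subpath_mount/test.log && sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations['mysubpath']"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol", MountPath: "/volume_mount"},
					// SubPathExpr expands $(POD_SUBPATH) at mount time.
					{Name: "vol", MountPath: "/subpath_mount", SubPathExpr: "$(POD_SUBPATH)"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}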
• [SLOW TEST:55.233 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":294,"completed":37,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:33:20.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ceb32add-88db-4bcf-987a-c3a71f4b1763 STEP: Creating a pod to test consume configMaps Aug 16 23:33:21.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6" in namespace "configmap-4323" to be "Succeeded or Failed" Aug 16 23:33:21.445: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 91.146013ms Aug 16 23:33:23.643: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289289722s Aug 16 23:33:25.661: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307076069s Aug 16 23:33:27.871: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516904986s Aug 16 23:33:29.874: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.520261849s STEP: Saw pod success Aug 16 23:33:29.874: INFO: Pod "pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6" satisfied condition "Succeeded or Failed" Aug 16 23:33:29.876: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6 container configmap-volume-test: STEP: delete the pod Aug 16 23:33:30.013: INFO: Waiting for pod pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6 to disappear Aug 16 23:33:30.053: INFO: Pod pod-configmaps-0a3f696b-0034-40c5-826c-30590fdcc8f6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:33:30.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4323" for this suite. • [SLOW TEST:9.385 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":38,"skipped":661,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:33:30.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:33:30.235: INFO: Creating deployment "webserver-deployment" Aug 16 23:33:30.283: INFO: Waiting for observed generation 1 Aug 16 23:33:32.689: INFO: Waiting for all required pods to come up Aug 16 23:33:32.694: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 16 23:33:45.159: INFO: Waiting for deployment "webserver-deployment" to complete Aug 16 23:33:45.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217624, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217624, loc:(*time.Location)(0x7e21f00)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217624, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733217610, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:33:47.230: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 16 23:33:47.239: INFO: Updating deployment webserver-deployment Aug 16 23:33:47.239: INFO: Waiting for observed generation 2 Aug 16 23:33:50.186: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 16 23:33:50.189: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 16 23:33:50.560: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 16 23:33:50.722: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 16 23:33:50.722: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 16 23:33:50.725: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 16 23:33:50.730: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 16 23:33:50.730: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 16 23:33:50.744: INFO: Updating deployment webserver-deployment Aug 16 23:33:50.744: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 16 23:33:51.380: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 16 23:33:51.643: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 16 23:33:51.795: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9883 /apis/apps/v1/namespaces/deployment-9883/deployments/webserver-deployment 2a6cf0c6-a9d6-4ec9-9737-711bc9fc505c 530573 3 2020-08-16 23:33:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-16 23:33:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00316d2b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-16 23:33:48 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-16 23:33:51 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 16 23:33:51.991: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9883 /apis/apps/v1/namespaces/deployment-9883/replicasets/webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 530621 3 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2a6cf0c6-a9d6-4ec9-9737-711bc9fc505c 0xc00316d757 0xc00316d758}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a6cf0c6-a9d6-4ec9-9737-711bc9fc505c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00316d7d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:33:51.991: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 16 23:33:51.991: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9883 /apis/apps/v1/namespaces/deployment-9883/replicasets/webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 530627 3 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2a6cf0c6-a9d6-4ec9-9737-711bc9fc505c 0xc00316d837 0xc00316d838}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a6cf0c6-a9d6-4ec9-9737-711bc9fc505c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00316d8a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:33:52.017: INFO: Pod "webserver-deployment-795d758f88-2xsfm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2xsfm webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-2xsfm 9ebf015a-b754-44a7-9dba-27436e9b5883 530518 0 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc00316ddd7 0xc00316ddd8}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-16 23:33:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.017: INFO: Pod "webserver-deployment-795d758f88-9hpkp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9hpkp webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-9hpkp 452b1a40-8ff7-4290-9677-0b9d9768f4ac 530632 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc00316df90 0xc00316df91}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-16 23:33:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.017: INFO: Pod "webserver-deployment-795d758f88-cj55b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cj55b webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-cj55b ed72d534-2ca1-459c-a0bf-32c074789407 530610 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8130 0xc0036c8131}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.017: INFO: Pod "webserver-deployment-795d758f88-cr9wc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cr9wc webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-cr9wc 1898190f-474d-4093-9fed-ee5b4bf3f37f 530531 0 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8270 0xc0036c8271}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-16 23:33:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-cs2vn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cs2vn webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-cs2vn 732972bc-8fc0-4a01-9a69-5d6b2236c004 530545 0 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8420 0xc0036c8421}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-16 23:33:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-fn8t8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fn8t8 webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-fn8t8 da6f1c33-4aaf-40d6-88c2-226c2aa0f84d 530617 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c85c0 0xc0036c85c1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-gqhr5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gqhr5 webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-gqhr5 7cc5f0aa-c366-4c7c-b59d-480edb1c7cdb 530547 0 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8700 0xc0036c8701}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-16 23:33:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-k585g" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-k585g webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-k585g 2e22c998-4678-46c6-a617-30cdb2920d99 530585 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8b30 0xc0036c8b31}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-rqcdn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rqcdn webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-rqcdn af50aee7-b6bf-42a2-b410-95dcbb9365c9 530609 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8c70 0xc0036c8c71}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-snkbg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-snkbg webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-snkbg cab0a0d9-741b-46ba-8243-2e37da3a3aa9 530602 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8db0 0xc0036c8db1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectRefer
ence{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.018: INFO: Pod "webserver-deployment-795d758f88-vbtzp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vbtzp webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-vbtzp 2c3f36d8-e4a3-4c30-abc9-daca5c698cdf 530583 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c8ef0 0xc0036c8ef1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-795d758f88-vxbgv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vxbgv webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-vxbgv 50bc4abd-6819-4894-ac59-f43690a81681 530599 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c9030 0xc0036c9031}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectRefe
rence{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-795d758f88-wmzv2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wmzv2 webserver-deployment-795d758f88- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-795d758f88-wmzv2 cbf01c28-cf6e-49c4-833b-e023e30b0379 530521 0 2020-08-16 23:33:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 c1906f30-46f4-4eb1-b3f9-11e16f10a66d 0xc0036c9170 0xc0036c9171}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c1906f30-46f4-4eb1-b3f9-11e16f10a66d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-16 23:33:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-dd94f59b7-56kwd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-56kwd webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-56kwd cb6b9738-68d5-4dd5-9861-7c0b6bd42f0e 530615 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9320 0xc0036c9321}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:
nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-dd94f59b7-6fh64" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6fh64 webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-6fh64 cc46fa5d-7a99-42e7-b1ba-6d5c8953aacc 530465 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9460 0xc0036c9461}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.181,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9a0f1ead3bccd18f4b729d2b1a90c6d047b5c635606f715c5e068d9a04a0b2ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-dd94f59b7-746k9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-746k9 webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-746k9 3af75063-9e21-476c-8131-8d6a84b830fd 530447 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9617 0xc0036c9618}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.184,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ae66e543c93ff46a991e3787fe4328f9f47c9910558095f5568187afb7480eac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.019: INFO: Pod "webserver-deployment-dd94f59b7-bw5gv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bw5gv webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-bw5gv 923f0ca1-6cce-42e1-af48-ff8a07638f5c 530587 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9817 0xc0036c9818}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-fcgl5" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fcgl5 webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-fcgl5 1de5edd0-00f4-4302-ad5c-db58070f3ff4 530574 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9960 0xc0036c9961}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Im
agePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-h59gh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h59gh webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-h59gh 36eb2b8f-41c4-4b61-8bd4-ef805c6cf43b 530589 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9a90 0xc0036c9a91}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-jsfxm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jsfxm webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-jsfxm 88b3b01f-c5c2-42bd-ab63-34d02b8c6abd 530458 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9bf0 0xc0036c9bf1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.180,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://97671e066a5d70476c7a7a84a9df185accb9c11ed6aa1f2a197c88b472c88458,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-kjdpc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kjdpc webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-kjdpc ca6d8b3d-2b25-47af-9fd7-3f52c5b3daa7 530613 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9d97 0xc0036c9d98}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-klhnw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-klhnw webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-klhnw c912e58c-c057-4fc2-95f8-c55647e1be9d 530634 0 2020-08-16 23:33:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0036c9ec0 0xc0036c9ec1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-16 23:33:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.020: INFO: Pod "webserver-deployment-dd94f59b7-ksvsn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ksvsn webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-ksvsn cdf19484-0a5c-4f78-af4d-807279995c33 530612 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003734177 0xc003734178}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-mspdn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mspdn webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-mspdn ca151079-5356-4b94-9a13-91a1deb5a8a4 530455 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0037343c0 0xc0037343c1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.186\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.186,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://044995de9285663b5185c6a669370171bb9e0c2ed15494f0a7af3cf7d34c1005,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-ndzct" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ndzct webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-ndzct 4e05de9d-76b4-4ed0-a690-20526ba1324d 530463 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003734907 0xc003734908}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.187,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d6df73d71c1f423abff74007f9bcf4cf255fb1e59317bcf2f7a85cc96f3d091a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-nrkgn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nrkgn webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-nrkgn 2f287979-bc94-43dd-8085-ea6a78f3cdf6 530588 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003734d17 0xc003734d18}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-plpr4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-plpr4 webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-plpr4 1e54e89e-2227-487d-bf16-544e838d1a27 530413 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003734ff0 0xc003734ff1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromS
ource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.183,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1bea3b449cebc411e3bde190fb5060384dded656b981e9cb2aecbad544571953,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-qj28m" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qj28m webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-qj28m 45879cfc-5e4a-4e43-b525-e9e331163077 530614 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003735277 0xc003735278}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-qwwht" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qwwht webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-qwwht 17fb7cee-d307-4629-b003-8bd0591186cc 530575 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0037357b0 0xc0037357b1}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Im
agePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.021: INFO: Pod "webserver-deployment-dd94f59b7-s7sx2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-s7sx2 webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-s7sx2 0e018138-3d32-4be8-93f4-e485bca68703 530616 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc003735d10 0xc003735d11}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.022: INFO: Pod "webserver-deployment-dd94f59b7-szd8b" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-szd8b webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-szd8b a8469837-0749-4057-a190-62b45f8cb5b1 530595 0 2020-08-16 23:33:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0032a0020 0xc0032a0021}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},I
magePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.022: INFO: Pod "webserver-deployment-dd94f59b7-z95wg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z95wg webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-z95wg d180ceb2-4614-473f-9e02-ce3828aea67b 530427 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0032a0220 0xc0032a0221}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.178,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ee9f37714feda6e0767b6e1f5140084d71753a62e8e91f40b65356c58bc5505,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:33:52.022: INFO: Pod "webserver-deployment-dd94f59b7-zfcxn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zfcxn webserver-deployment-dd94f59b7- deployment-9883 /api/v1/namespaces/deployment-9883/pods/webserver-deployment-dd94f59b7-zfcxn 50ae07cc-128d-4b7f-b951-82c40c035904 530439 0 2020-08-16 23:33:30 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8919296f-8d63-4608-ab3c-30b1c2ed54c5 0xc0032a04e7 0xc0032a04e8}] [] [{kube-controller-manager Update v1 2020-08-16 23:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8919296f-8d63-4608-ab3c-30b1c2ed54c5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:33:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dnsvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dnsvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dnsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:33:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.185,StartTime:2020-08-16 23:33:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:33:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a42f88de062e287a90483b4606e958c439cf76efb74c3332aa96d9f7006b7b5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:33:52.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9883" for this suite. • [SLOW TEST:22.284 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":294,"completed":39,"skipped":665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:33:52.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5815 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5815;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5815 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5815;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5815.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5815.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5815.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5815.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.137.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.137.8_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5815 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5815;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5815 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5815;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5815.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5815.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5815.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5815.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.137.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.137.8_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 23:34:24.242: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.284: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.411: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.481: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.487: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.644: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.661: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.666: INFO: Unable to read jessie_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.710: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.739: INFO: Unable to read jessie_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.836: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.840: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:24.844: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:25.616: INFO: Lookups using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5815 wheezy_tcp@dns-test-service.dns-5815 wheezy_udp@dns-test-service.dns-5815.svc wheezy_tcp@dns-test-service.dns-5815.svc wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5815 jessie_tcp@dns-test-service.dns-5815 jessie_udp@dns-test-service.dns-5815.svc jessie_tcp@dns-test-service.dns-5815.svc jessie_udp@_http._tcp.dns-test-service.dns-5815.svc jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc] Aug 16 23:34:30.662: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.665: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.671: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.675: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.677: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.680: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.758: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.760: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.793: INFO: Unable to read jessie_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.798: INFO: Unable to read jessie_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.800: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.803: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.806: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:30.819: INFO: Lookups using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5815 wheezy_tcp@dns-test-service.dns-5815 wheezy_udp@dns-test-service.dns-5815.svc wheezy_tcp@dns-test-service.dns-5815.svc wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5815 jessie_tcp@dns-test-service.dns-5815 jessie_udp@dns-test-service.dns-5815.svc jessie_tcp@dns-test-service.dns-5815.svc jessie_udp@_http._tcp.dns-test-service.dns-5815.svc jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc] Aug 16 23:34:35.620: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.624: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815 from pod 
dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.640: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.643: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.666: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.668: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.672: INFO: Unable to read jessie_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.685: INFO: Unable to read jessie_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.693: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:35.706: INFO: Lookups using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5815 wheezy_tcp@dns-test-service.dns-5815 wheezy_udp@dns-test-service.dns-5815.svc wheezy_tcp@dns-test-service.dns-5815.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5815 jessie_tcp@dns-test-service.dns-5815 jessie_udp@dns-test-service.dns-5815.svc jessie_tcp@dns-test-service.dns-5815.svc jessie_udp@_http._tcp.dns-test-service.dns-5815.svc jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc] Aug 16 23:34:40.650: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.653: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.686: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.688: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.695: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.859: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.862: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.865: INFO: Unable to read jessie_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.869: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.873: INFO: Unable to read jessie_udp@dns-test-service.dns-5815.svc from pod 
dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.880: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.884: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:40.897: INFO: Lookups using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5815 wheezy_tcp@dns-test-service.dns-5815 wheezy_udp@dns-test-service.dns-5815.svc wheezy_tcp@dns-test-service.dns-5815.svc wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5815 jessie_tcp@dns-test-service.dns-5815 jessie_udp@dns-test-service.dns-5815.svc jessie_tcp@dns-test-service.dns-5815.svc jessie_udp@_http._tcp.dns-test-service.dns-5815.svc jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc] Aug 16 23:34:46.151: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:46.640: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:46.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:46.647: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815 from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:47.300: INFO: Unable to read wheezy_udp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:48.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:48.393: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:49.575: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod 
dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:52.343: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:52.346: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc from pod dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527: the server could not find the requested resource (get pods dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527) Aug 16 23:34:52.555: INFO: Lookups using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5815 wheezy_tcp@dns-test-service.dns-5815 wheezy_udp@dns-test-service.dns-5815.svc wheezy_tcp@dns-test-service.dns-5815.svc wheezy_udp@_http._tcp.dns-test-service.dns-5815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5815.svc jessie_udp@_http._tcp.dns-test-service.dns-5815.svc jessie_tcp@_http._tcp.dns-test-service.dns-5815.svc] Aug 16 23:34:59.933: INFO: DNS probes using dns-5815/dns-test-b12eba54-b8ce-4b67-b457-8d73f92d8527 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:35:04.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5815" for this suite. • [SLOW TEST:72.776 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":294,"completed":40,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:35:05.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:35:06.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b" in namespace "projected-8823" to be "Succeeded or Failed" Aug 16 23:35:06.208: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 205.205377ms Aug 16 23:35:08.562: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558965999s Aug 16 23:35:11.526: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.523767061s Aug 16 23:35:13.567: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.564152351s Aug 16 23:35:16.238: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.235584684s Aug 16 23:35:18.627: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.623953313s Aug 16 23:35:20.679: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.676108829s Aug 16 23:35:22.723: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.719917625s Aug 16 23:35:24.872: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Running", Reason="", readiness=true. Elapsed: 18.869416007s Aug 16 23:35:26.876: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.873711796s STEP: Saw pod success Aug 16 23:35:26.876: INFO: Pod "downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b" satisfied condition "Succeeded or Failed" Aug 16 23:35:26.879: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b container client-container: STEP: delete the pod Aug 16 23:35:27.101: INFO: Waiting for pod downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b to disappear Aug 16 23:35:27.117: INFO: Pod downwardapi-volume-a147ef7e-543b-443f-a91f-55befc96182b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:35:27.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8823" for this suite. 
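The DefaultMode check above boils down to mounting a projected downwardAPI volume with an explicit defaultMode and reading the resulting file permissions from inside the pod. Below is a minimal standalone sketch of that technique — it is not the e2e framework's own code, and the pod name "defaultmode-demo", the busybox image, and the 0400 mode are illustrative assumptions, not values taken from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo        # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31         # any image with stat(1) works
    # Print the octal mode of the projected file; -L follows the
    # symlink the kubelet creates inside projected volume mounts.
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400         # assumed mode; applies to all projected sources
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs defaultmode-demo   # expect "400" once the pod has Succeeded

Because restartPolicy is Never and the command exits after printing, the pod reaches the Succeeded phase — the same terminal state the test's "Succeeded or Failed" wait above observes — and the container log carries the mode string for verification.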
• [SLOW TEST:21.882 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":41,"skipped":771,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:35:27.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9404 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9404 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9404 Aug 16 23:35:30.597: INFO: Found 0 stateful pods, waiting for 1 Aug 16 23:35:40.707: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Aug 16 23:35:50.609: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 16 23:35:50.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:35:57.354: INFO: stderr: "I0816 23:35:57.212664 526 log.go:181] (0xc000a62160) (0xc000bcce60) Create stream\nI0816 23:35:57.212808 526 log.go:181] (0xc000a62160) (0xc000bcce60) Stream added, broadcasting: 1\nI0816 23:35:57.214711 526 log.go:181] (0xc000a62160) Reply frame received for 1\nI0816 23:35:57.214773 526 log.go:181] (0xc000a62160) (0xc000bc2780) Create stream\nI0816 23:35:57.214795 526 log.go:181] (0xc000a62160) (0xc000bc2780) Stream added, broadcasting: 3\nI0816 23:35:57.215682 526 log.go:181] 
(0xc000a62160) Reply frame received for 3\nI0816 23:35:57.215709 526 log.go:181] (0xc000a62160) (0xc000bc2c80) Create stream\nI0816 23:35:57.215728 526 log.go:181] (0xc000a62160) (0xc000bc2c80) Stream added, broadcasting: 5\nI0816 23:35:57.216546 526 log.go:181] (0xc000a62160) Reply frame received for 5\nI0816 23:35:57.283818 526 log.go:181] (0xc000a62160) Data frame received for 5\nI0816 23:35:57.283856 526 log.go:181] (0xc000bc2c80) (5) Data frame handling\nI0816 23:35:57.283882 526 log.go:181] (0xc000bc2c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:35:57.342731 526 log.go:181] (0xc000a62160) Data frame received for 3\nI0816 23:35:57.342767 526 log.go:181] (0xc000bc2780) (3) Data frame handling\nI0816 23:35:57.342789 526 log.go:181] (0xc000bc2780) (3) Data frame sent\nI0816 23:35:57.342798 526 log.go:181] (0xc000a62160) Data frame received for 3\nI0816 23:35:57.342805 526 log.go:181] (0xc000bc2780) (3) Data frame handling\nI0816 23:35:57.342967 526 log.go:181] (0xc000a62160) Data frame received for 5\nI0816 23:35:57.342999 526 log.go:181] (0xc000bc2c80) (5) Data frame handling\nI0816 23:35:57.346363 526 log.go:181] (0xc000a62160) Data frame received for 1\nI0816 23:35:57.346407 526 log.go:181] (0xc000bcce60) (1) Data frame handling\nI0816 23:35:57.346435 526 log.go:181] (0xc000bcce60) (1) Data frame sent\nI0816 23:35:57.346461 526 log.go:181] (0xc000a62160) (0xc000bcce60) Stream removed, broadcasting: 1\nI0816 23:35:57.346875 526 log.go:181] (0xc000a62160) Go away received\nI0816 23:35:57.346980 526 log.go:181] (0xc000a62160) (0xc000bcce60) Stream removed, broadcasting: 1\nI0816 23:35:57.347021 526 log.go:181] (0xc000a62160) (0xc000bc2780) Stream removed, broadcasting: 3\nI0816 23:35:57.347046 526 log.go:181] (0xc000a62160) (0xc000bc2c80) Stream removed, broadcasting: 5\n" Aug 16 23:35:57.354: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:35:57.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:35:57.359: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 16 23:36:07.370: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:36:07.370: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:36:07.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999637s Aug 16 23:36:08.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.713978146s Aug 16 23:36:10.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.491630514s Aug 16 23:36:11.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.30619434s Aug 16 23:36:12.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.301259569s Aug 16 23:36:13.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.296510297s Aug 16 23:36:14.281: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.286306665s Aug 16 23:36:15.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.15668006s Aug 16 23:36:16.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.152918696s Aug 16 23:36:17.292: INFO: Verifying statefulset ss doesn't scale past 1 for another 149.85221ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9404 Aug 16 23:36:18.296: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:36:18.512: INFO: stderr: "I0816 23:36:18.424402 544 log.go:181] (0xc000eab4a0) (0xc000be59a0) Create stream\nI0816 23:36:18.424462 544 log.go:181] (0xc000eab4a0) (0xc000be59a0) Stream added, broadcasting: 1\nI0816 23:36:18.428635 544 log.go:181] (0xc000eab4a0) Reply frame received for 1\nI0816 23:36:18.428654 544 log.go:181] (0xc000eab4a0) (0xc0009d8aa0) Create stream\nI0816 23:36:18.428661 544 log.go:181] (0xc000eab4a0) (0xc0009d8aa0) Stream added, broadcasting: 3\nI0816 23:36:18.429648 544 log.go:181] (0xc000eab4a0) Reply frame received for 3\nI0816 23:36:18.429679 544 log.go:181] (0xc000eab4a0) (0xc0003c45a0) Create stream\nI0816 23:36:18.429690 544 log.go:181] (0xc000eab4a0) (0xc0003c45a0) Stream added, broadcasting: 5\nI0816 23:36:18.430465 544 log.go:181] (0xc000eab4a0) Reply frame received for 5\nI0816 23:36:18.499153 544 log.go:181] (0xc000eab4a0) Data frame received for 5\nI0816 23:36:18.499183 544 log.go:181] (0xc0003c45a0) (5) Data frame handling\nI0816 23:36:18.499208 544 log.go:181] (0xc0003c45a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 23:36:18.504439 544 log.go:181] (0xc000eab4a0) Data frame received for 3\nI0816 23:36:18.504450 544 log.go:181] (0xc0009d8aa0) (3) Data frame handling\nI0816 23:36:18.504464 544 log.go:181] (0xc0009d8aa0) (3) Data frame sent\nI0816 23:36:18.504468 544 log.go:181] (0xc000eab4a0) Data frame received for 3\nI0816 23:36:18.504473 544 log.go:181] (0xc0009d8aa0) (3) Data frame handling\nI0816 23:36:18.504602 544 log.go:181] (0xc000eab4a0) Data frame received for 5\nI0816 23:36:18.504625 544 log.go:181] (0xc0003c45a0) (5) Data frame handling\nI0816 23:36:18.505873 544 log.go:181] (0xc000eab4a0) Data frame received for 1\nI0816 23:36:18.505888 544 log.go:181] (0xc000be59a0) (1) Data frame handling\nI0816 23:36:18.505897 544 log.go:181] (0xc000be59a0) (1) Data frame sent\nI0816 23:36:18.505905 544 log.go:181] (0xc000eab4a0) (0xc000be59a0) Stream removed, broadcasting: 1\nI0816 23:36:18.506089 544 log.go:181] (0xc000eab4a0) Go away received\nI0816 23:36:18.506172 544 log.go:181] (0xc000eab4a0) (0xc000be59a0) Stream removed, broadcasting: 1\nI0816 23:36:18.506184 544 log.go:181] (0xc000eab4a0) (0xc0009d8aa0) Stream removed, broadcasting: 3\nI0816 23:36:18.506195 544 log.go:181] (0xc000eab4a0) (0xc0003c45a0) Stream removed, broadcasting: 5\n" Aug 16 23:36:18.512: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:36:18.512: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:36:18.534: INFO: Found 1 stateful pods, waiting for 3 Aug 16 23:36:28.538: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 23:36:28.538: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 23:36:28.538: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 16 23:36:28.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:36:28.765: INFO: stderr: "I0816 23:36:28.678217 562 log.go:181] (0xc000c9cf20) (0xc000b27220) Create stream\nI0816 23:36:28.678293 562 log.go:181] (0xc000c9cf20) (0xc000b27220) Stream added, broadcasting: 1\nI0816 23:36:28.680545 562 log.go:181] (0xc000c9cf20) Reply frame received for 1\nI0816 23:36:28.680604 562 log.go:181] (0xc000c9cf20) (0xc0008275e0) Create stream\nI0816 23:36:28.680626 562 log.go:181] (0xc000c9cf20) (0xc0008275e0) Stream added, broadcasting: 3\nI0816 23:36:28.681608 562 log.go:181] (0xc000c9cf20) Reply frame received for 3\nI0816 23:36:28.681644 562 log.go:181] (0xc000c9cf20) (0xc000d1c0a0) Create stream\nI0816 23:36:28.681654 562 log.go:181] (0xc000c9cf20) (0xc000d1c0a0) Stream added, broadcasting: 5\nI0816 23:36:28.682446 562 log.go:181] (0xc000c9cf20) Reply frame received for 5\nI0816 23:36:28.755420 562 log.go:181] (0xc000c9cf20) Data frame received for 3\nI0816 23:36:28.755442 562 log.go:181] (0xc0008275e0) (3) Data frame handling\nI0816 23:36:28.755456 562 log.go:181] (0xc0008275e0) (3) Data frame sent\nI0816 23:36:28.755463 562 log.go:181] (0xc000c9cf20) Data frame received for 3\nI0816 23:36:28.755468 562 log.go:181] (0xc0008275e0) (3) Data frame handling\nI0816 23:36:28.755637 562 log.go:181] (0xc000c9cf20) Data frame received for 5\nI0816 23:36:28.755655 562 log.go:181] (0xc000d1c0a0) (5) Data frame handling\nI0816 23:36:28.755673 562 log.go:181] (0xc000d1c0a0) (5) Data frame sent\nI0816 23:36:28.755681 562 log.go:181] (0xc000c9cf20) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:36:28.755688 562 log.go:181] (0xc000d1c0a0) (5) Data frame handling\nI0816 23:36:28.757437 562 log.go:181] (0xc000c9cf20) Data frame received for 1\nI0816 23:36:28.757455 562 log.go:181] (0xc000b27220) (1) Data frame handling\nI0816 23:36:28.757469 562 log.go:181] (0xc000b27220) (1) Data frame sent\nI0816 23:36:28.757478 562 log.go:181] (0xc000c9cf20) (0xc000b27220) Stream removed, broadcasting: 1\nI0816 23:36:28.757489 562 log.go:181] (0xc000c9cf20) Go away received\nI0816 23:36:28.757743 562 log.go:181] (0xc000c9cf20) (0xc000b27220) Stream removed, broadcasting: 1\nI0816 23:36:28.757754 562 log.go:181] (0xc000c9cf20) (0xc0008275e0) Stream removed, broadcasting: 3\nI0816 23:36:28.757765 562 log.go:181] (0xc000c9cf20) (0xc000d1c0a0) Stream removed, broadcasting: 5\n" Aug 16 23:36:28.766: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:36:28.766: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:36:28.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:36:29.395: INFO: stderr: "I0816 23:36:28.963991 580 log.go:181] (0xc000521080) (0xc00067d860) Create stream\nI0816 23:36:28.964079 580 log.go:181] (0xc000521080) (0xc00067d860) Stream added, broadcasting: 1\nI0816 23:36:28.967828 580 log.go:181] (0xc000521080) Reply frame received for 1\nI0816 23:36:28.967874 580 log.go:181] (0xc000521080) (0xc000989360) Create stream\nI0816 23:36:28.967889 580 log.go:181] (0xc000521080) (0xc000989360) Stream added, broadcasting: 3\nI0816 23:36:28.969062 580 log.go:181] (0xc000521080) Reply frame received for 3\nI0816 23:36:28.969099 580 log.go:181] (0xc000521080) (0xc00065f220) 
Create stream\nI0816 23:36:28.969111 580 log.go:181] (0xc000521080) (0xc00065f220) Stream added, broadcasting: 5\nI0816 23:36:28.970197 580 log.go:181] (0xc000521080) Reply frame received for 5\nI0816 23:36:29.023687 580 log.go:181] (0xc000521080) Data frame received for 5\nI0816 23:36:29.023712 580 log.go:181] (0xc00065f220) (5) Data frame handling\nI0816 23:36:29.023725 580 log.go:181] (0xc00065f220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:36:29.383232 580 log.go:181] (0xc000521080) Data frame received for 3\nI0816 23:36:29.383261 580 log.go:181] (0xc000989360) (3) Data frame handling\nI0816 23:36:29.383276 580 log.go:181] (0xc000989360) (3) Data frame sent\nI0816 23:36:29.383284 580 log.go:181] (0xc000521080) Data frame received for 3\nI0816 23:36:29.383290 580 log.go:181] (0xc000989360) (3) Data frame handling\nI0816 23:36:29.383576 580 log.go:181] (0xc000521080) Data frame received for 5\nI0816 23:36:29.383604 580 log.go:181] (0xc00065f220) (5) Data frame handling\nI0816 23:36:29.385438 580 log.go:181] (0xc000521080) Data frame received for 1\nI0816 23:36:29.385455 580 log.go:181] (0xc00067d860) (1) Data frame handling\nI0816 23:36:29.385472 580 log.go:181] (0xc00067d860) (1) Data frame sent\nI0816 23:36:29.385595 580 log.go:181] (0xc000521080) (0xc00067d860) Stream removed, broadcasting: 1\nI0816 23:36:29.385876 580 log.go:181] (0xc000521080) Go away received\nI0816 23:36:29.385932 580 log.go:181] (0xc000521080) (0xc00067d860) Stream removed, broadcasting: 1\nI0816 23:36:29.385947 580 log.go:181] (0xc000521080) (0xc000989360) Stream removed, broadcasting: 3\nI0816 23:36:29.385954 580 log.go:181] (0xc000521080) (0xc00065f220) Stream removed, broadcasting: 5\n" Aug 16 23:36:29.395: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:36:29.395: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:36:29.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 23:36:30.409: INFO: stderr: "I0816 23:36:30.216649 598 log.go:181] (0xc0009c6210) (0xc0009f9860) Create stream\nI0816 23:36:30.216889 598 log.go:181] (0xc0009c6210) (0xc0009f9860) Stream added, broadcasting: 1\nI0816 23:36:30.221076 598 log.go:181] (0xc0009c6210) Reply frame received for 1\nI0816 23:36:30.221112 598 log.go:181] (0xc0009c6210) (0xc00072c8c0) Create stream\nI0816 23:36:30.221120 598 log.go:181] (0xc0009c6210) (0xc00072c8c0) Stream added, broadcasting: 3\nI0816 23:36:30.221854 598 log.go:181] (0xc0009c6210) Reply frame received for 3\nI0816 23:36:30.221886 598 log.go:181] (0xc0009c6210) (0xc00072d900) Create stream\nI0816 23:36:30.221897 598 log.go:181] (0xc0009c6210) (0xc00072d900) Stream added, broadcasting: 5\nI0816 23:36:30.222577 598 log.go:181] (0xc0009c6210) Reply frame received for 5\nI0816 23:36:30.297221 598 log.go:181] (0xc0009c6210) Data frame received for 5\nI0816 23:36:30.297260 598 log.go:181] (0xc00072d900) (5) Data frame handling\nI0816 23:36:30.297283 598 log.go:181] (0xc00072d900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 23:36:30.399773 598 log.go:181] (0xc0009c6210) Data frame received for 3\nI0816 23:36:30.399883 598 log.go:181] (0xc00072c8c0) (3) Data frame handling\nI0816 23:36:30.399921 598 log.go:181] 
(0xc00072c8c0) (3) Data frame sent\nI0816 23:36:30.399933 598 log.go:181] (0xc0009c6210) Data frame received for 3\nI0816 23:36:30.399940 598 log.go:181] (0xc00072c8c0) (3) Data frame handling\nI0816 23:36:30.400048 598 log.go:181] (0xc0009c6210) Data frame received for 5\nI0816 23:36:30.400074 598 log.go:181] (0xc00072d900) (5) Data frame handling\nI0816 23:36:30.402174 598 log.go:181] (0xc0009c6210) Data frame received for 1\nI0816 23:36:30.402205 598 log.go:181] (0xc0009f9860) (1) Data frame handling\nI0816 23:36:30.402230 598 log.go:181] (0xc0009f9860) (1) Data frame sent\nI0816 23:36:30.402260 598 log.go:181] (0xc0009c6210) (0xc0009f9860) Stream removed, broadcasting: 1\nI0816 23:36:30.402296 598 log.go:181] (0xc0009c6210) Go away received\nI0816 23:36:30.402537 598 log.go:181] (0xc0009c6210) (0xc0009f9860) Stream removed, broadcasting: 1\nI0816 23:36:30.402552 598 log.go:181] (0xc0009c6210) (0xc00072c8c0) Stream removed, broadcasting: 3\nI0816 23:36:30.402558 598 log.go:181] (0xc0009c6210) (0xc00072d900) Stream removed, broadcasting: 5\n" Aug 16 23:36:30.409: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 23:36:30.410: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 23:36:30.410: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:36:30.412: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 16 23:36:41.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:36:41.246: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:36:41.246: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 16 23:36:41.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999922s Aug 16 23:36:42.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.671816854s Aug 16 23:36:43.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.667724968s Aug 16 23:36:44.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.660193732s Aug 16 23:36:46.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.654900218s Aug 16 23:36:47.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.516152207s Aug 16 23:36:48.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.389704248s Aug 16 23:36:49.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.384732424s Aug 16 23:36:50.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.379295173s Aug 16 23:36:51.202: INFO: Verifying statefulset ss doesn't scale past 3 for another 330.457956ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9404 Aug 16 23:36:52.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:36:52.444: INFO: stderr: "I0816 23:36:52.353105 616 log.go:181] (0xc000a4b080) (0xc000c657c0) Create stream\nI0816 23:36:52.353161 616 log.go:181] (0xc000a4b080) (0xc000c657c0) Stream added, broadcasting: 1\nI0816 23:36:52.359796 616 log.go:181] (0xc000a4b080) Reply frame received for 1\nI0816 23:36:52.359852 616 log.go:181] (0xc000a4b080) (0xc000a970e0) Create 
stream\nI0816 23:36:52.359868 616 log.go:181] (0xc000a4b080) (0xc000a970e0) Stream added, broadcasting: 3\nI0816 23:36:52.360661 616 log.go:181] (0xc000a4b080) Reply frame received for 3\nI0816 23:36:52.360692 616 log.go:181] (0xc000a4b080) (0xc0008ee3c0) Create stream\nI0816 23:36:52.360710 616 log.go:181] (0xc000a4b080) (0xc0008ee3c0) Stream added, broadcasting: 5\nI0816 23:36:52.361626 616 log.go:181] (0xc000a4b080) Reply frame received for 5\nI0816 23:36:52.432887 616 log.go:181] (0xc000a4b080) Data frame received for 5\nI0816 23:36:52.432928 616 log.go:181] (0xc0008ee3c0) (5) Data frame handling\nI0816 23:36:52.432950 616 log.go:181] (0xc0008ee3c0) (5) Data frame sent\nI0816 23:36:52.432964 616 log.go:181] (0xc000a4b080) Data frame received for 5\nI0816 23:36:52.432974 616 log.go:181] (0xc0008ee3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 23:36:52.433026 616 log.go:181] (0xc000a4b080) Data frame received for 3\nI0816 23:36:52.433051 616 log.go:181] (0xc000a970e0) (3) Data frame handling\nI0816 23:36:52.433064 616 log.go:181] (0xc000a970e0) (3) Data frame sent\nI0816 23:36:52.433072 616 log.go:181] (0xc000a4b080) Data frame received for 3\nI0816 23:36:52.433077 616 log.go:181] (0xc000a970e0) (3) Data frame handling\nI0816 23:36:52.434342 616 log.go:181] (0xc000a4b080) Data frame received for 1\nI0816 23:36:52.434364 616 log.go:181] (0xc000c657c0) (1) Data frame handling\nI0816 23:36:52.434372 616 log.go:181] (0xc000c657c0) (1) Data frame sent\nI0816 23:36:52.434382 616 log.go:181] (0xc000a4b080) (0xc000c657c0) Stream removed, broadcasting: 1\nI0816 23:36:52.434523 616 log.go:181] (0xc000a4b080) Go away received\nI0816 23:36:52.434714 616 log.go:181] (0xc000a4b080) (0xc000c657c0) Stream removed, broadcasting: 1\nI0816 23:36:52.434733 616 log.go:181] (0xc000a4b080) (0xc000a970e0) Stream removed, broadcasting: 3\nI0816 23:36:52.434744 616 log.go:181] (0xc000a4b080) (0xc0008ee3c0) Stream removed, broadcasting: 5\n" Aug 16 23:36:52.444: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:36:52.444: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:36:52.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:36:52.687: INFO: stderr: "I0816 23:36:52.603134 634 log.go:181] (0xc000158e70) (0xc000aa45a0) Create stream\nI0816 23:36:52.603178 634 log.go:181] (0xc000158e70) (0xc000aa45a0) Stream added, broadcasting: 1\nI0816 23:36:52.606674 634 log.go:181] (0xc000158e70) Reply frame received for 1\nI0816 23:36:52.606716 634 log.go:181] (0xc000158e70) (0xc00090aa00) Create stream\nI0816 23:36:52.606728 634 log.go:181] (0xc000158e70) (0xc00090aa00) Stream added, broadcasting: 3\nI0816 23:36:52.607896 634 log.go:181] (0xc000158e70) Reply frame received for 3\nI0816 23:36:52.607934 634 log.go:181] (0xc000158e70) (0xc0007a0320) Create stream\nI0816 23:36:52.607944 634 log.go:181] (0xc000158e70) (0xc0007a0320) Stream added, broadcasting: 5\nI0816 23:36:52.609512 634 log.go:181] (0xc000158e70) Reply frame received for 5\nI0816 23:36:52.678600 634 log.go:181] (0xc000158e70) Data frame received for 5\nI0816 23:36:52.678658 634 log.go:181] (0xc0007a0320) (5) Data frame handling\nI0816 23:36:52.678672 634 log.go:181] (0xc0007a0320) (5) Data frame 
sent\nI0816 23:36:52.678682 634 log.go:181] (0xc000158e70) Data frame received for 5\nI0816 23:36:52.678690 634 log.go:181] (0xc0007a0320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 23:36:52.678735 634 log.go:181] (0xc000158e70) Data frame received for 3\nI0816 23:36:52.678764 634 log.go:181] (0xc00090aa00) (3) Data frame handling\nI0816 23:36:52.678792 634 log.go:181] (0xc00090aa00) (3) Data frame sent\nI0816 23:36:52.678813 634 log.go:181] (0xc000158e70) Data frame received for 3\nI0816 23:36:52.678832 634 log.go:181] (0xc00090aa00) (3) Data frame handling\nI0816 23:36:52.680052 634 log.go:181] (0xc000158e70) Data frame received for 1\nI0816 23:36:52.680080 634 log.go:181] (0xc000aa45a0) (1) Data frame handling\nI0816 23:36:52.680107 634 log.go:181] (0xc000aa45a0) (1) Data frame sent\nI0816 23:36:52.680133 634 log.go:181] (0xc000158e70) (0xc000aa45a0) Stream removed, broadcasting: 1\nI0816 23:36:52.680314 634 log.go:181] (0xc000158e70) Go away received\nI0816 23:36:52.680573 634 log.go:181] (0xc000158e70) (0xc000aa45a0) Stream removed, broadcasting: 1\nI0816 23:36:52.680597 634 log.go:181] (0xc000158e70) (0xc00090aa00) Stream removed, broadcasting: 3\nI0816 23:36:52.680613 634 log.go:181] (0xc000158e70) (0xc0007a0320) Stream removed, broadcasting: 5\n" Aug 16 23:36:52.687: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:36:52.687: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:36:52.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9404 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 23:36:52.899: INFO: stderr: "I0816 23:36:52.817503 652 log.go:181] (0xc000cd3080) (0xc000bbfa40) Create stream\nI0816 23:36:52.817583 652 log.go:181] (0xc000cd3080) (0xc000bbfa40) Stream added, broadcasting: 1\nI0816 23:36:52.822057 652 log.go:181] (0xc000cd3080) Reply frame received for 1\nI0816 23:36:52.822111 652 log.go:181] (0xc000cd3080) (0xc0002f65a0) Create stream\nI0816 23:36:52.822126 652 log.go:181] (0xc000cd3080) (0xc0002f65a0) Stream added, broadcasting: 3\nI0816 23:36:52.822949 652 log.go:181] (0xc000cd3080) Reply frame received for 3\nI0816 23:36:52.822984 652 log.go:181] (0xc000cd3080) (0xc000bab0e0) Create stream\nI0816 23:36:52.822997 652 log.go:181] (0xc000cd3080) (0xc000bab0e0) Stream added, broadcasting: 5\nI0816 23:36:52.823988 652 log.go:181] (0xc000cd3080) Reply frame received for 5\nI0816 23:36:52.889646 652 log.go:181] (0xc000cd3080) Data frame received for 3\nI0816 23:36:52.889686 652 log.go:181] (0xc0002f65a0) (3) Data frame handling\nI0816 23:36:52.889698 652 log.go:181] (0xc0002f65a0) (3) Data frame sent\nI0816 23:36:52.889707 652 log.go:181] (0xc000cd3080) Data frame received for 3\nI0816 23:36:52.889716 652 log.go:181] (0xc0002f65a0) (3) Data frame handling\nI0816 23:36:52.889752 652 log.go:181] (0xc000cd3080) Data frame received for 5\nI0816 23:36:52.889774 652 log.go:181] (0xc000bab0e0) (5) Data frame handling\nI0816 23:36:52.889816 652 log.go:181] (0xc000bab0e0) (5) Data frame sent\nI0816 23:36:52.889843 652 log.go:181] (0xc000cd3080) Data frame received for 5\nI0816 23:36:52.889864 652 log.go:181] (0xc000bab0e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 23:36:52.891230 652 log.go:181] (0xc000cd3080) Data frame received for 
1\nI0816 23:36:52.891245 652 log.go:181] (0xc000bbfa40) (1) Data frame handling\nI0816 23:36:52.891261 652 log.go:181] (0xc000bbfa40) (1) Data frame sent\nI0816 23:36:52.891273 652 log.go:181] (0xc000cd3080) (0xc000bbfa40) Stream removed, broadcasting: 1\nI0816 23:36:52.891292 652 log.go:181] (0xc000cd3080) Go away received\nI0816 23:36:52.891622 652 log.go:181] (0xc000cd3080) (0xc000bbfa40) Stream removed, broadcasting: 1\nI0816 23:36:52.891641 652 log.go:181] (0xc000cd3080) (0xc0002f65a0) Stream removed, broadcasting: 3\nI0816 23:36:52.891651 652 log.go:181] (0xc000cd3080) (0xc000bab0e0) Stream removed, broadcasting: 5\n" Aug 16 23:36:52.899: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 23:36:52.899: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 23:36:52.899: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 16 23:37:22.921: INFO: Deleting all statefulset in ns statefulset-9404 Aug 16 23:37:22.925: INFO: Scaling statefulset ss to 0 Aug 16 23:37:22.934: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 23:37:22.936: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:37:22.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9404" for this suite. 
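Everything the suite just did through its framework can be reproduced with plain kubectl. The mv the log shows being exec'd is the whole trick: hiding index.html makes the httpd readiness probe fail, and with the default OrderedReady pod management a StatefulSet will not create or delete pods past an unready ordinal. A sketch against this run's names (namespace statefulset-9404, selector baz=blah,foo=bar):

# Break readiness on ss-0, exactly as the test does:
kubectl exec -n statefulset-9404 ss-0 -- /bin/sh -c \
  'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# While ss-0 is unready, a scale-up stalls: ss-1 is never created,
# because OrderedReady only advances past Ready pods.
kubectl scale statefulset ss -n statefulset-9404 --replicas=3
kubectl get pods -n statefulset-9404 -l baz=blah,foo=bar -w

# Restore the page; the probe passes and creation resumes in ordinal
# order (ss-0, ss-1, ss-2). Scale-down proceeds in reverse (ss-2 first).
kubectl exec -n statefulset-9404 ss-0 -- /bin/sh -c \
  'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'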
• [SLOW TEST:115.837 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":294,"completed":42,"skipped":781,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:37:22.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 16 23:37:30.641: INFO: 0 pods remaining Aug 16 23:37:30.641: INFO: 0 pods has nil DeletionTimestamp Aug 16 23:37:30.641: INFO: Aug 16 23:37:31.679: INFO: 0 pods remaining Aug 16 23:37:31.679: INFO: 0 pods has nil DeletionTimestamp Aug 16 23:37:31.679: INFO: STEP: Gathering metrics W0816 23:37:32.686414 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 16 23:38:36.250: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:38:36.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6814" for this suite. 
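The deleteOptions this test refers to is, by all appearances, propagationPolicy: Foreground: the ReplicationController receives a deletionTimestamp and a foregroundDeletion finalizer, and is only removed once the garbage collector has deleted every pod it owns, which is what the "0 pods remaining" polling above waits for. A sketch of issuing such a delete by hand through the REST API (the rc name and namespace here are hypothetical):

# Open an authenticated proxy to the API server in one shell:
kubectl proxy --port=8001 &

# Delete with foreground propagation: the rc lingers, carrying the
# foregroundDeletion finalizer, until all of its pods are gone.
curl -X DELETE \
  'http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'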
• [SLOW TEST:73.351 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":294,"completed":43,"skipped":789,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:38:36.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-739148b9-7fac-481d-9988-c0de2fa53730 STEP: Creating configMap with name cm-test-opt-upd-a35cdf0b-a838-4acb-8a13-ca631271b43b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-739148b9-7fac-481d-9988-c0de2fa53730 STEP: Updating configmap cm-test-opt-upd-a35cdf0b-a838-4acb-8a13-ca631271b43b STEP: Creating configMap with name cm-test-opt-create-a72c070a-04c8-4881-b756-ae63cef614c5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:39:56.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5255" for this suite. 
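Two volume behaviors carry this test: a pod may reference a ConfigMap that does not exist when the volume is marked optional, and the kubelet propagates ConfigMap edits into already-mounted volumes on its sync interval, which is why the run spends over a minute "waiting to observe update in volume". A minimal sketch with hypothetical names:

kubectl create configmap cm-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: demo
    image: busybox:1.32
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: cm-demo
      optional: true   # the pod starts (with an empty volume) even if cm-demo is missing
EOF

# Edit the ConfigMap; within the kubelet sync period the mounted file
# changes and the pod's output flips from value-1 to value-2.
kubectl create configmap cm-demo --from-literal=data-1=value-2 \
  -o yaml --dry-run=client | kubectl replace -f -
kubectl logs configmap-volume-demo --tail=3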
• [SLOW TEST:79.998 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":44,"skipped":794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:39:56.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:39:57.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-580" for this suite. STEP: Destroying namespace "nspatchtest-a4a65a26-458e-4780-b69b-2691af4da74a-7304" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":294,"completed":45,"skipped":909,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:39:57.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Aug 16 23:39:57.969: INFO: Created pod &Pod{ObjectMeta:{dns-455 dns-455 /api/v1/namespaces/dns-455/pods/dns-455 77318b6d-6fee-4448-a056-aa3ed8e48efc 532863 0 2020-08-16 23:39:57 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-16 23:39:57 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rrgfd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rrgfd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rrgfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNode
Name:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 16 23:39:57.994: INFO: The status of Pod dns-455 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:40:00.140: INFO: The status of Pod dns-455 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:40:02.398: INFO: The status of Pod dns-455 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:40:04.018: INFO: The status of Pod dns-455 is Pending, waiting for it to be Running (with Ready = true) Aug 16 23:40:05.998: INFO: The status of Pod dns-455 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 16 23:40:05.998: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-455 PodName:dns-455 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 23:40:05.998: INFO: >>> kubeConfig: /root/.kube/config I0816 23:40:06.035939 7 log.go:181] (0xc003383760) (0xc00310af00) Create stream I0816 23:40:06.035973 7 log.go:181] (0xc003383760) (0xc00310af00) Stream added, broadcasting: 1 I0816 23:40:06.038553 7 log.go:181] (0xc003383760) Reply frame received for 1 I0816 23:40:06.038627 7 log.go:181] (0xc003383760) (0xc0036a6320) Create stream I0816 23:40:06.038653 7 log.go:181] (0xc003383760) (0xc0036a6320) Stream added, broadcasting: 3 I0816 23:40:06.039608 7 log.go:181] (0xc003383760) Reply frame received for 3 I0816 23:40:06.039641 7 log.go:181] (0xc003383760) (0xc003751360) Create stream I0816 23:40:06.039653 7 log.go:181] (0xc003383760) (0xc003751360) Stream added, broadcasting: 5 I0816 23:40:06.040503 7 log.go:181] (0xc003383760) Reply frame received for 5 I0816 23:40:06.109498 7 log.go:181] (0xc003383760) Data frame received for 3 I0816 23:40:06.109521 7 log.go:181] (0xc0036a6320) (3) Data frame handling I0816 23:40:06.109532 7 log.go:181] (0xc0036a6320) (3) Data frame sent I0816 23:40:06.110587 7 log.go:181] (0xc003383760) Data frame received for 3 I0816 23:40:06.110606 7 log.go:181] (0xc0036a6320) (3) Data frame handling I0816 23:40:06.110631 7 log.go:181] (0xc003383760) Data frame received for 5 I0816 23:40:06.110650 7 log.go:181] (0xc003751360) (5) Data frame handling I0816 23:40:06.114882 7 log.go:181] (0xc003383760) Data frame received for 1 I0816 23:40:06.114908 7 log.go:181] (0xc00310af00) (1) Data frame handling I0816 23:40:06.114924 7 log.go:181] (0xc00310af00) (1) Data frame sent I0816 23:40:06.114937 7 log.go:181] (0xc003383760) (0xc00310af00) Stream removed, broadcasting: 1 I0816 23:40:06.114947 7 log.go:181] (0xc003383760) Go away received I0816 23:40:06.115105 7 log.go:181] (0xc003383760) (0xc00310af00) Stream removed, broadcasting: 1 I0816 23:40:06.115119 7 log.go:181] (0xc003383760) (0xc0036a6320) Stream removed, broadcasting: 3 I0816 23:40:06.115124 7 log.go:181] (0xc003383760) (0xc003751360) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Aug 16 23:40:06.115: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-455 PodName:dns-455 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 23:40:06.115: INFO: >>> kubeConfig: /root/.kube/config I0816 23:40:06.136594 7 log.go:181] (0xc003383d90) (0xc000175400) Create stream I0816 23:40:06.136612 7 log.go:181] (0xc003383d90) (0xc000175400) Stream added, broadcasting: 1 I0816 23:40:06.138263 7 log.go:181] (0xc003383d90) Reply frame received for 1 I0816 23:40:06.138297 7 log.go:181] (0xc003383d90) (0xc0035f75e0) Create stream I0816 23:40:06.138305 7 log.go:181] (0xc003383d90) (0xc0035f75e0) Stream added, broadcasting: 3 I0816 23:40:06.139114 7 log.go:181] (0xc003383d90) Reply frame received for 3 I0816 23:40:06.139138 7 log.go:181] (0xc003383d90) (0xc003751400) Create stream I0816 23:40:06.139146 7 log.go:181] (0xc003383d90) (0xc003751400) Stream added, broadcasting: 5 I0816 23:40:06.139857 7 log.go:181] (0xc003383d90) Reply frame received for 5 I0816 23:40:06.226483 7 log.go:181] (0xc003383d90) Data frame received for 3 I0816 23:40:06.226509 7 log.go:181] (0xc0035f75e0) (3) Data frame handling I0816 23:40:06.226526 7 log.go:181] (0xc0035f75e0) (3) Data frame sent I0816 23:40:06.227724 7 log.go:181] (0xc003383d90) Data frame received for 5 I0816 23:40:06.227807 7 log.go:181] (0xc003751400) (5) Data frame handling I0816 23:40:06.227874 7 log.go:181] (0xc003383d90) Data frame received for 3 I0816 23:40:06.227894 7 log.go:181] (0xc0035f75e0) (3) Data frame handling I0816 23:40:06.229613 7 log.go:181] (0xc003383d90) Data frame received for 1 I0816 23:40:06.229664 7 log.go:181] (0xc000175400) (1) Data frame handling I0816 23:40:06.229705 7 log.go:181] (0xc000175400) (1) Data frame sent I0816 23:40:06.229736 7 log.go:181] (0xc003383d90) (0xc000175400) Stream removed, broadcasting: 1 I0816 23:40:06.229757 7 log.go:181] (0xc003383d90) Go away received I0816 23:40:06.229846 7 log.go:181] (0xc003383d90) (0xc000175400) Stream removed, broadcasting: 1 I0816 23:40:06.229863 7 log.go:181] (0xc003383d90) (0xc0035f75e0) Stream removed, broadcasting: 3 I0816 23:40:06.229874 7 log.go:181] (0xc003383d90) (0xc003751400) Stream removed, broadcasting: 5 Aug 16 23:40:06.229: INFO: Deleting pod dns-455... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:40:06.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-455" for this suite. 
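dnsPolicy: None instructs the kubelet to ignore the cluster DNS settings entirely and to assemble the pod's /etc/resolv.conf purely from dnsConfig. A declarative sketch of the pod this test created programmatically; the nameserver, search suffix, and image are the values visible in the dump above, while the pod name is hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: "None"                 # drop the cluster resolver entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]        # becomes the only nameserver entry
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
    args: ["pause"]
EOF

# The rendered resolver config can be read back directly; this is the
# same information the agnhost dns-suffix / dns-server-list checks use:
kubectl exec dns-demo -- cat /etc/resolv.conf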
• [SLOW TEST:8.421 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":294,"completed":46,"skipped":914,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:40:06.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:40:06.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878" in namespace "downward-api-223" to be "Succeeded or Failed" Aug 16 23:40:07.022: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878": Phase="Pending", Reason="", readiness=false. Elapsed: 198.571378ms Aug 16 23:40:09.068: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244904872s Aug 16 23:40:11.254: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431403099s Aug 16 23:40:13.314: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878": Phase="Pending", Reason="", readiness=false. Elapsed: 6.491064797s Aug 16 23:40:15.319: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.495820384s STEP: Saw pod success Aug 16 23:40:15.319: INFO: Pod "downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878" satisfied condition "Succeeded or Failed" Aug 16 23:40:15.322: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878 container client-container: STEP: delete the pod Aug 16 23:40:15.555: INFO: Waiting for pod downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878 to disappear Aug 16 23:40:15.576: INFO: Pod downwardapi-volume-a4bbc033-8962-4999-a996-ee84f5047878 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:40:15.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-223" for this suite. • [SLOW TEST:9.291 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":47,"skipped":920,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:40:15.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Aug 16 23:40:25.245: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3248 pod-service-account-38b9d044-5ea4-4f8f-8b2a-8bd78f32e26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 16 23:40:25.993: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3248 pod-service-account-38b9d044-5ea4-4f8f-8b2a-8bd78f32e26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 16 23:40:26.189: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3248 pod-service-account-38b9d044-5ea4-4f8f-8b2a-8bd78f32e26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:40:26.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "svcaccounts-3248" for this suite. • [SLOW TEST:10.993 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":294,"completed":48,"skipped":928,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:40:26.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 16 23:40:36.438: INFO: Successfully updated pod "labelsupdate95640d2b-7ad3-4dda-bc8c-2e58f7f1f9ca" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:40:38.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9457" for this suite. 
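Labels are one of the pod fields the downward API exposes as a live file: when metadata.labels change, the kubelet rewrites the projected file on its next sync, which is why the test waits a moment after "Successfully updated pod" before tearing down. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.32
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

# Update a label; the mounted file follows shortly afterwards.
kubectl label pod labelsupdate-demo key1=value2 --overwrite
kubectl logs labelsupdate-demo --tail=2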
• [SLOW TEST:12.618 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":49,"skipped":932,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:40:39.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:40:40.130: INFO: Creating deployment "test-recreate-deployment" Aug 16 23:40:40.134: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 16 23:40:40.715: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 16 23:40:42.768: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 16 23:40:42.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7589bf48bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:40:45.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218040, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7589bf48bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:40:46.949: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 16 23:40:47.021: INFO: Updating deployment test-recreate-deployment Aug 16 23:40:47.021: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 16 23:40:48.162: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-283 /apis/apps/v1/namespaces/deployment-283/deployments/test-recreate-deployment a2b5cf00-cb07-4801-84c3-e77180dc7a82 533226 2 2020-08-16 23:40:40 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-16 23:40:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:40:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00245e028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] 
map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-16 23:40:48 +0000 UTC,LastTransitionTime:2020-08-16 23:40:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-16 23:40:48 +0000 UTC,LastTransitionTime:2020-08-16 23:40:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 16 23:40:48.180: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-283 /apis/apps/v1/namespaces/deployment-283/replicasets/test-recreate-deployment-f79dd4667 bdb633e9-72f5-452b-b406-7feaf37b545f 533224 1 2020-08-16 23:40:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a2b5cf00-cb07-4801-84c3-e77180dc7a82 0xc00245e5a0 0xc00245e5a1}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:40:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2b5cf00-cb07-4801-84c3-e77180dc7a82\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00245e618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:40:48.180: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 16 23:40:48.180: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7589bf48bb deployment-283 /apis/apps/v1/namespaces/deployment-283/replicasets/test-recreate-deployment-7589bf48bb 3807121d-f776-43f0-beb6-815533d68acc 533213 2 2020-08-16 23:40:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:7589bf48bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a2b5cf00-cb07-4801-84c3-e77180dc7a82 0xc00245e487 0xc00245e488}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:40:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2b5cf00-cb07-4801-84c3-e77180dc7a82\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7589bf48bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7589bf48bb] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00245e538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:40:48.572: INFO: Pod "test-recreate-deployment-f79dd4667-zrqdw" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-zrqdw test-recreate-deployment-f79dd4667- deployment-283 /api/v1/namespaces/deployment-283/pods/test-recreate-deployment-f79dd4667-zrqdw c933f15f-d025-44ad-ae8d-3e5a63f436ed 533221 0 2020-08-16 23:40:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] 
map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 bdb633e9-72f5-452b-b406-7feaf37b545f 0xc00245eb30 0xc00245eb31}] [] [{kube-controller-manager Update v1 2020-08-16 23:40:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdb633e9-72f5-452b-b406-7feaf37b545f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cw2js,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cw2js,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cw2js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCon
straint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:40:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:40:48.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-283" for this suite. • [SLOW TEST:9.411 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":50,"skipped":949,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:40:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-vrdp STEP: Creating a pod to test atomic-volume-subpath Aug 16 23:40:48.789: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vrdp" in namespace "subpath-3544" to be "Succeeded or Failed" Aug 16 23:40:48.925: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Pending", Reason="", readiness=false. Elapsed: 135.590201ms Aug 16 23:40:51.146: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356883236s Aug 16 23:40:53.692: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.902564603s Aug 16 23:40:55.745: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955460793s Aug 16 23:40:57.748: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.959334868s Aug 16 23:40:59.752: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 10.962507141s Aug 16 23:41:01.754: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 12.965086225s Aug 16 23:41:03.758: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 14.96888515s Aug 16 23:41:05.763: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 16.973566472s Aug 16 23:41:08.338: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 19.549317982s Aug 16 23:41:10.346: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 21.556631433s Aug 16 23:41:12.350: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 23.56037631s Aug 16 23:41:14.352: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 25.563245232s Aug 16 23:41:16.356: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Running", Reason="", readiness=true. Elapsed: 27.566980391s Aug 16 23:41:18.360: INFO: Pod "pod-subpath-test-secret-vrdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.571003666s STEP: Saw pod success Aug 16 23:41:18.360: INFO: Pod "pod-subpath-test-secret-vrdp" satisfied condition "Succeeded or Failed" Aug 16 23:41:18.363: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-vrdp container test-container-subpath-secret-vrdp: STEP: delete the pod Aug 16 23:41:18.586: INFO: Waiting for pod pod-subpath-test-secret-vrdp to disappear Aug 16 23:41:18.775: INFO: Pod pod-subpath-test-secret-vrdp no longer exists STEP: Deleting pod pod-subpath-test-secret-vrdp Aug 16 23:41:18.775: INFO: Deleting pod "pod-subpath-test-secret-vrdp" in namespace "subpath-3544" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:41:18.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3544" for this suite. 
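The pod in the test above mounts a single key of a secret through a volumeMount subPath, so the container sees one file rather than the whole secret directory, and the kubelet's atomic writer keeps that file consistent while the test reads it. A minimal client-go sketch of an equivalent pod follows; the secret name "my-secret", its key "data-1", the image, and the paths are illustrative stand-ins, not the objects the framework actually generates:

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a clientset from the local kubeconfig, the same entry point the
	// framework logs as ">>> kubeConfig: /root/.kube/config".
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-secret-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				// Secret volumes are written by the kubelet's atomic writer,
				// which is what the "Atomic writer volumes" group exercises.
				// The secret "my-secret" with key "data-1" is assumed to exist.
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "subpath-reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /probe-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/probe-file",
					// SubPath mounts one key of the secret as a single file
					// instead of exposing the whole volume directory.
					SubPath: "data-1",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}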
• [SLOW TEST:30.164 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":294,"completed":51,"skipped":955,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:41:18.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:41:19.174: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 16 23:41:21.283: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:41:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7295" for this suite. 
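The quota test above hinges on the replication controller surfacing a ReplicaFailure condition when pod creation is rejected by a ResourceQuota, and on that condition clearing once the RC is scaled back within quota. A rough client-go sketch of the two objects involved, assuming a clientset cs built from kubeconfig as in the earlier sketch; names, counts, and the image are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// surfaceQuotaFailure creates a quota of two pods and an RC asking for three
// replicas, then reads back the RC's conditions. A ReplicaFailure condition
// is expected to appear; a real check would poll until it does.
func surfaceQuotaFailure(ctx context.Context, cs kubernetes.Interface, ns string) error {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		return err
	}

	replicas := int32(3) // one more than the quota allows
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "httpd:2.4.38-alpine"}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}

	got, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, rc.Name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range got.Status.Conditions {
		fmt.Printf("condition %s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	}
	return nil
}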
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":294,"completed":52,"skipped":967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:41:22.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 23:41:24.968: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 23:41:27.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218085, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:41:29.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218085, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218084, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 
23:41:32.650: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:41:32.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1643" for this suite. STEP: Destroying namespace "webhook-1643-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.172 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":294,"completed":53,"skipped":992,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:41:33.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 23:41:36.285: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 23:41:38.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218095, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:41:40.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218095, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:41:42.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218096, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218095, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 23:41:45.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error 
when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:41:57.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4507" for this suite. STEP: Destroying namespace "webhook-4507-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.713 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":294,"completed":54,"skipped":1001,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:41:57.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4 Aug 16 23:41:57.795: INFO: Pod name my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4: Found 0 pods out of 1 Aug 16 23:42:02.812: INFO: Pod name my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4: Found 1 pods out of 1 Aug 16 23:42:02.812: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4" are running Aug 16 23:42:02.821: INFO: Pod "my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4-ztsrm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:41:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:42:00 +0000 UTC 
Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:42:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:41:57 +0000 UTC Reason: Message:}]) Aug 16 23:42:02.822: INFO: Trying to dial the pod Aug 16 23:42:07.835: INFO: Controller my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4: Got expected result from replica 1 [my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4-ztsrm]: "my-hostname-basic-06adb102-915f-4890-86cd-73f3bafc2ad4-ztsrm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:07.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2995" for this suite. • [SLOW TEST:10.185 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":55,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:07.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8693c0d6-41ec-4d12-af1e-7862d1e11a42 STEP: Creating a pod to test consume configMaps Aug 16 23:42:07.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77" in namespace "configmap-9316" to be "Succeeded or Failed" Aug 16 23:42:08.019: INFO: Pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77": Phase="Pending", Reason="", readiness=false. Elapsed: 37.04094ms Aug 16 23:42:10.189: INFO: Pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207139991s Aug 16 23:42:12.398: INFO: Pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.416816649s Aug 16 23:42:14.402: INFO: Pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.420693082s STEP: Saw pod success Aug 16 23:42:14.402: INFO: Pod "pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77" satisfied condition "Succeeded or Failed" Aug 16 23:42:14.406: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77 container configmap-volume-test: STEP: delete the pod Aug 16 23:42:14.558: INFO: Waiting for pod pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77 to disappear Aug 16 23:42:14.606: INFO: Pod pod-configmaps-adb8a20a-36c2-4deb-94e9-846e2e5e8c77 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9316" for this suite. • [SLOW TEST:6.770 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":56,"skipped":1044,"failed":0} [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:14.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9695.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9695.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 23:42:25.305: INFO: DNS probes using dns-9695/dns-test-e783698b-7591-443e-af69-82084cc93a2d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:25.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9695" for this suite. • [SLOW TEST:11.265 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":294,"completed":57,"skipped":1044,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:25.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 16 23:42:26.285: INFO: Waiting up to 5m0s for pod "var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca" in namespace "var-expansion-5487" to be "Succeeded or Failed" Aug 16 23:42:26.312: INFO: Pod 
"var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca": Phase="Pending", Reason="", readiness=false. Elapsed: 26.877574ms Aug 16 23:42:28.603: INFO: Pod "var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317263798s Aug 16 23:42:30.606: INFO: Pod "var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320910523s Aug 16 23:42:32.609: INFO: Pod "var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.323959943s STEP: Saw pod success Aug 16 23:42:32.610: INFO: Pod "var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca" satisfied condition "Succeeded or Failed" Aug 16 23:42:32.611: INFO: Trying to get logs from node latest-worker pod var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca container dapi-container: STEP: delete the pod Aug 16 23:42:32.687: INFO: Waiting for pod var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca to disappear Aug 16 23:42:32.695: INFO: Pod var-expansion-97c7656f-c6c6-4bae-b5a7-5065dc5c95ca no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:32.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5487" for this suite. • [SLOW TEST:6.822 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":294,"completed":58,"skipped":1045,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:32.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 16 23:42:32.775: INFO: Waiting up to 5m0s for pod "downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85" in namespace "downward-api-7710" to be "Succeeded or Failed" Aug 16 23:42:32.812: INFO: Pod "downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85": Phase="Pending", Reason="", readiness=false. Elapsed: 36.93281ms Aug 16 23:42:34.848: INFO: Pod "downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072513208s Aug 16 23:42:36.851: INFO: Pod "downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075626897s STEP: Saw pod success Aug 16 23:42:36.851: INFO: Pod "downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85" satisfied condition "Succeeded or Failed" Aug 16 23:42:36.853: INFO: Trying to get logs from node latest-worker2 pod downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85 container dapi-container: STEP: delete the pod Aug 16 23:42:36.886: INFO: Waiting for pod downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85 to disappear Aug 16 23:42:36.910: INFO: Pod downward-api-95772b6d-81bd-4efe-986a-6cbe023d8e85 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:36.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7710" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":294,"completed":59,"skipped":1047,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:36.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:37.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4062" for this suite. 
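The lease test above is plain CRUD against the coordination.k8s.io/v1 Lease resource, the same object that leader election is built on. A minimal sketch of a create/renew/delete round trip, assuming a clientset cs constructed as in the earlier sketch; the holder identity and duration are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// leaseRoundTrip creates a Lease, renews it once, and deletes it, touching
// the same verbs the conformance test drives through the lease API.
func leaseRoundTrip(ctx context.Context, cs kubernetes.Interface, ns string) error {
	holder := "my-holder"
	seconds := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "my-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
		},
	}
	created, err := cs.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	// Renew: bump RenewTime and update, as a leader-election client would.
	now := metav1.NewMicroTime(time.Now())
	created.Spec.RenewTime = &now
	if _, err := cs.CoordinationV1().Leases(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		return err
	}

	fmt.Println("lease renewed, deleting")
	return cs.CoordinationV1().Leases(ns).Delete(ctx, created.Name, metav1.DeleteOptions{})
}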
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":294,"completed":60,"skipped":1065,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:37.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:42:37.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37" in namespace "downward-api-5986" to be "Succeeded or Failed" Aug 16 23:42:37.873: INFO: Pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37": Phase="Pending", Reason="", readiness=false. Elapsed: 199.82018ms Aug 16 23:42:39.876: INFO: Pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203388787s Aug 16 23:42:41.993: INFO: Pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320244746s Aug 16 23:42:43.997: INFO: Pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.323836739s STEP: Saw pod success Aug 16 23:42:43.997: INFO: Pod "downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37" satisfied condition "Succeeded or Failed" Aug 16 23:42:43.999: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37 container client-container: STEP: delete the pod Aug 16 23:42:44.344: INFO: Waiting for pod downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37 to disappear Aug 16 23:42:44.349: INFO: Pod downwardapi-volume-67f4133d-809d-44fe-a2a3-4caf8aefad37 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:44.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5986" for this suite. 
• [SLOW TEST:6.957 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":61,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:44.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 16 23:42:44.508: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7797 /api/v1/namespaces/watch-7797/configmaps/e2e-watch-test-watch-closed c05097d6-717f-485d-b4fe-ed80ea3fd7ad 534160 0 2020-08-16 23:42:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-16 23:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 16 23:42:44.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7797 /api/v1/namespaces/watch-7797/configmaps/e2e-watch-test-watch-closed c05097d6-717f-485d-b4fe-ed80ea3fd7ad 534161 0 2020-08-16 23:42:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-16 23:42:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 16 23:42:44.577: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7797 /api/v1/namespaces/watch-7797/configmaps/e2e-watch-test-watch-closed c05097d6-717f-485d-b4fe-ed80ea3fd7ad 534162 0 2020-08-16 23:42:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update 
v1 2020-08-16 23:42:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 16 23:42:44.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7797 /api/v1/namespaces/watch-7797/configmaps/e2e-watch-test-watch-closed c05097d6-717f-485d-b4fe-ed80ea3fd7ad 534163 0 2020-08-16 23:42:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-16 23:42:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:42:44.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7797" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":294,"completed":62,"skipped":1105,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:42:44.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0816 23:43:25.273690 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 16 23:44:27.290: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
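The 30-second wait above is the heart of this test: the RC was deleted with the orphan propagation policy, so the garbage collector must strip the pods' ownerReferences rather than cascade the delete, and the pod deletions logged next are the test cleaning up the orphans it deliberately left behind. A minimal sketch of an orphaning delete, assuming a clientset cs as before:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a ReplicationController but tells the API
// server to orphan its pods: the garbage collector removes the pods'
// ownerReferences instead of cascading the delete, so the pods keep running.
func deleteRCOrphaningPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}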
Aug 16 23:44:27.290: INFO: Deleting pod "simpletest.rc-4cbwz" in namespace "gc-3314" Aug 16 23:44:27.327: INFO: Deleting pod "simpletest.rc-4vdj6" in namespace "gc-3314" Aug 16 23:44:27.499: INFO: Deleting pod "simpletest.rc-b5kbz" in namespace "gc-3314" Aug 16 23:44:27.950: INFO: Deleting pod "simpletest.rc-hf9kh" in namespace "gc-3314" Aug 16 23:44:28.075: INFO: Deleting pod "simpletest.rc-k7v8f" in namespace "gc-3314" Aug 16 23:44:28.403: INFO: Deleting pod "simpletest.rc-phppw" in namespace "gc-3314" Aug 16 23:44:28.957: INFO: Deleting pod "simpletest.rc-pr47j" in namespace "gc-3314" Aug 16 23:44:29.112: INFO: Deleting pod "simpletest.rc-s2zg4" in namespace "gc-3314" Aug 16 23:44:29.499: INFO: Deleting pod "simpletest.rc-xqtsm" in namespace "gc-3314" Aug 16 23:44:30.064: INFO: Deleting pod "simpletest.rc-zrlnp" in namespace "gc-3314" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:44:30.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3314" for this suite. • [SLOW TEST:106.519 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":294,"completed":63,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:44:31.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 16 23:44:40.763: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:44:40.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6827" for this suite. 
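------------------------------
Note: the "release" half of this spec works by editing the pod's label out from under the ReplicaSet's selector, after which the controller drops its ownerReference. A rough client-go sketch of that label flip, assuming a reachable cluster; the namespace and new label value are placeholders:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := client.CoreV1().Pods("default") // placeholder namespace
	pod, err := pods.Get(ctx, "pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	// Stop matching the ReplicaSet's selector; the controller then removes
	// its ownerReference, releasing the pod.
	pod.Labels["name"] = "pod-adoption-release-released" // placeholder value
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------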
• [SLOW TEST:9.939 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":294,"completed":64,"skipped":1183,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:44:41.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-n72d STEP: Creating a pod to test atomic-volume-subpath Aug 16 23:44:41.131: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n72d" in namespace "subpath-2620" to be "Succeeded or Failed" Aug 16 23:44:41.170: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.206681ms Aug 16 23:44:43.196: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065183147s Aug 16 23:44:45.221: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089873003s Aug 16 23:44:47.224: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 6.093521399s Aug 16 23:44:49.227: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 8.095733306s Aug 16 23:44:51.231: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 10.099886514s Aug 16 23:44:53.235: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 12.10382521s Aug 16 23:44:55.239: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 14.107781478s Aug 16 23:44:57.243: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 16.112145063s Aug 16 23:44:59.248: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 18.116620363s Aug 16 23:45:01.252: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.121427962s Aug 16 23:45:03.257: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 22.125578295s Aug 16 23:45:05.261: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Running", Reason="", readiness=true. Elapsed: 24.129755766s Aug 16 23:45:07.265: INFO: Pod "pod-subpath-test-configmap-n72d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.134071716s STEP: Saw pod success Aug 16 23:45:07.265: INFO: Pod "pod-subpath-test-configmap-n72d" satisfied condition "Succeeded or Failed" Aug 16 23:45:07.268: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-n72d container test-container-subpath-configmap-n72d: STEP: delete the pod Aug 16 23:45:07.581: INFO: Waiting for pod pod-subpath-test-configmap-n72d to disappear Aug 16 23:45:07.592: INFO: Pod pod-subpath-test-configmap-n72d no longer exists STEP: Deleting pod pod-subpath-test-configmap-n72d Aug 16 23:45:07.592: INFO: Deleting pod "pod-subpath-test-configmap-n72d" in namespace "subpath-2620" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:07.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2620" for this suite. • [SLOW TEST:26.557 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":294,"completed":65,"skipped":1195,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:07.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 16 23:45:07.827: INFO: Waiting up to 5m0s for pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14" in namespace "emptydir-5162" to be "Succeeded or Failed" Aug 16 23:45:07.946: INFO: Pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 119.08695ms Aug 16 23:45:09.970: INFO: Pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143152813s Aug 16 23:45:11.993: INFO: Pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166081947s Aug 16 23:45:13.999: INFO: Pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17128731s STEP: Saw pod success Aug 16 23:45:13.999: INFO: Pod "pod-a363a41c-d8ab-432a-8c82-7d8438e64f14" satisfied condition "Succeeded or Failed" Aug 16 23:45:14.000: INFO: Trying to get logs from node latest-worker pod pod-a363a41c-d8ab-432a-8c82-7d8438e64f14 container test-container: STEP: delete the pod Aug 16 23:45:14.026: INFO: Waiting for pod pod-a363a41c-d8ab-432a-8c82-7d8438e64f14 to disappear Aug 16 23:45:14.041: INFO: Pod pod-a363a41c-d8ab-432a-8c82-7d8438e64f14 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5162" for this suite. • [SLOW TEST:6.444 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":66,"skipped":1202,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:14.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the 
/apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:14.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2995" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":294,"completed":67,"skipped":1217,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:14.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-3f071bf3-5eb3-4534-a7ac-4e318d124436 STEP: Creating a pod to test consume configMaps Aug 16 23:45:14.349: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c" in namespace "projected-7788" to be "Succeeded or Failed" Aug 16 23:45:14.352: INFO: Pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.317362ms Aug 16 23:45:16.355: INFO: Pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005929622s Aug 16 23:45:18.396: INFO: Pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046794718s Aug 16 23:45:20.399: INFO: Pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.05035881s STEP: Saw pod success Aug 16 23:45:20.399: INFO: Pod "pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c" satisfied condition "Succeeded or Failed" Aug 16 23:45:20.402: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c container projected-configmap-volume-test: STEP: delete the pod Aug 16 23:45:21.512: INFO: Waiting for pod pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c to disappear Aug 16 23:45:21.550: INFO: Pod pod-projected-configmaps-13f3a090-6ba0-4447-8c5f-bfbe3162726c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:21.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7788" for this suite. • [SLOW TEST:7.541 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":68,"skipped":1226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:21.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 16 23:45:22.026: INFO: Waiting up to 5m0s for pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb" in namespace "emptydir-2353" to be "Succeeded or Failed" Aug 16 23:45:22.065: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 38.210148ms Aug 16 23:45:24.131: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1043876s Aug 16 23:45:26.134: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10785677s Aug 16 23:45:28.139: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112167966s Aug 16 23:45:30.143: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.116710163s STEP: Saw pod success Aug 16 23:45:30.143: INFO: Pod "pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb" satisfied condition "Succeeded or Failed" Aug 16 23:45:30.147: INFO: Trying to get logs from node latest-worker2 pod pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb container test-container: STEP: delete the pod Aug 16 23:45:30.207: INFO: Waiting for pod pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb to disappear Aug 16 23:45:30.221: INFO: Pod pod-9c610bb1-a42a-4d89-b9b6-8dc4c3fd16fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:30.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2353" for this suite. • [SLOW TEST:8.552 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":69,"skipped":1256,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:30.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a4e3f2d5-cdce-4d9d-8b8d-2fcbc53cc781 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a4e3f2d5-cdce-4d9d-8b8d-2fcbc53cc781 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:38.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-841" for this suite. 
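------------------------------
Note: this spec mutates the ConfigMap object on the API server and then polls the mounted file until the kubelet's sync loop rewrites it (the volume is written atomically via a symlink swap, so readers never see a torn file). A minimal client-go sketch of the update step; namespace, name, key, and value are placeholders:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := client.CoreV1().ConfigMaps("default") // placeholder namespace
	cm, err := cms.Get(ctx, "configmap-test-upd", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // placeholder key/value
	// The kubelet rewrites the projected file on its next sync; running pods
	// observe the new value without a restart.
	if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------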
• [SLOW TEST:8.468 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":70,"skipped":1269,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:38.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 16 23:45:39.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1840' Aug 16 23:45:39.855: INFO: stderr: "" Aug 16 23:45:39.855: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 16 23:45:40.860: INFO: Selector matched 1 pods for map[app:agnhost] Aug 16 23:45:40.860: INFO: Found 0 / 1 Aug 16 23:45:41.860: INFO: Selector matched 1 pods for map[app:agnhost] Aug 16 23:45:41.860: INFO: Found 0 / 1 Aug 16 23:45:42.859: INFO: Selector matched 1 pods for map[app:agnhost] Aug 16 23:45:42.859: INFO: Found 1 / 1 Aug 16 23:45:42.859: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 16 23:45:42.863: INFO: Selector matched 1 pods for map[app:agnhost] Aug 16 23:45:42.863: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 16 23:45:42.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config patch pod agnhost-primary-7p8nh --namespace=kubectl-1840 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 16 23:45:42.971: INFO: stderr: "" Aug 16 23:45:42.971: INFO: stdout: "pod/agnhost-primary-7p8nh patched\n" STEP: checking annotations Aug 16 23:45:42.981: INFO: Selector matched 1 pods for map[app:agnhost] Aug 16 23:45:42.981: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
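------------------------------
Note: the kubectl patch above is a strategic-merge patch; the equivalent API call through client-go looks roughly like the following. The pod name and namespace are this run's and are gone after teardown, so treat them as placeholders:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same payload kubectl sent: add the annotation x=y.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err = client.CoreV1().Pods("kubectl-1840").Patch(
		context.TODO(), "agnhost-primary-7p8nh",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------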
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1840" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":294,"completed":71,"skipped":1283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:42.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 16 23:45:43.056: INFO: Waiting up to 5m0s for pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361" in namespace "downward-api-3914" to be "Succeeded or Failed" Aug 16 23:45:43.077: INFO: Pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361": Phase="Pending", Reason="", readiness=false. Elapsed: 20.810143ms Aug 16 23:45:45.081: INFO: Pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024883307s Aug 16 23:45:47.085: INFO: Pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028337449s Aug 16 23:45:49.088: INFO: Pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031973005s STEP: Saw pod success Aug 16 23:45:49.088: INFO: Pod "downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361" satisfied condition "Succeeded or Failed" Aug 16 23:45:49.091: INFO: Trying to get logs from node latest-worker pod downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361 container dapi-container: STEP: delete the pod Aug 16 23:45:49.129: INFO: Waiting for pod downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361 to disappear Aug 16 23:45:49.162: INFO: Pod downward-api-7fabd4fa-261e-4ac5-8335-374be45a1361 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:49.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3914" for this suite. 
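------------------------------
Note: the env-var plumbing verified above comes from resourceFieldRef selectors on the container's env. A trimmed Go sketch of such a pod spec, assuming a reachable cluster; names and image are placeholders, and the real spec wires requests.cpu/requests.memory and limits.memory the same way:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{{
					Name: "CPU_LIMIT",
					ValueFrom: &corev1.EnvVarSource{
						// Resolved by the kubelet from the container's
						// resource limits at pod start.
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "dapi-container",
							Resource:      "limits.cpu",
						},
					},
				}},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------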
• [SLOW TEST:6.180 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":294,"completed":72,"skipped":1330,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:49.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-941.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-941.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-941.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-941.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 23:45:57.347: INFO: DNS probes using dns-941/dns-test-461d90ee-15fd-4efb-8236-ad1284d53da4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:57.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-941" for this suite. • [SLOW TEST:9.185 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":294,"completed":73,"skipped":1335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:45:58.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:45:59.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9383" for this suite. 
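------------------------------
Note: the "fetching services" step above is a single list against the empty namespace, which the API server interprets as "all namespaces". Roughly, with client-go (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// metav1.NamespaceAll is the empty string: list across every namespace.
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}
------------------------------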
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":294,"completed":74,"skipped":1375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:00.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:46:01.786: INFO: Creating ReplicaSet my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda Aug 16 23:46:02.386: INFO: Pod name my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda: Found 0 pods out of 1 Aug 16 23:46:07.389: INFO: Pod name my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda: Found 1 pods out of 1 Aug 16 23:46:07.389: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda" is running Aug 16 23:46:07.391: INFO: Pod "my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda-hxzz9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:46:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:46:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:46:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 23:46:02 +0000 UTC Reason: Message:}]) Aug 16 23:46:07.392: INFO: Trying to dial the pod Aug 16 23:46:12.400: INFO: Controller my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda: Got expected result from replica 1 [my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda-hxzz9]: "my-hostname-basic-8f49f3e0-4b69-48b5-b2f6-4bb44998bfda-hxzz9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:46:12.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8175" for this suite. 
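------------------------------
Note: the "Found N pods out of 1" lines above are a poll loop over the ReplicaSet's pods; the same readiness check can be written against status.readyReplicas. A sketch under the same placeholder-name caveats as the other notes:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until every desired replica reports Ready (placeholder names).
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		rs, err := client.AppsV1().ReplicaSets("default").Get(
			context.TODO(), "my-hostname-basic", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return rs.Spec.Replicas != nil &&
			rs.Status.ReadyReplicas == *rs.Spec.Replicas, nil
	})
	if err != nil {
		panic(err)
	}
}
------------------------------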
• [SLOW TEST:11.637 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":75,"skipped":1401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:12.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:46:20.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7898" for this suite. 
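------------------------------
Note: with a command that always fails and a restart policy that leaves the container terminated, the assertion here reduces to reading the terminated state back out of the pod status. A sketch of that read; pod name and namespace are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("default").Get( // placeholder namespace
		context.TODO(), "bin-false-pod", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		// A terminated container exposes its reason (e.g. "Error") and
		// exit code in State.Terminated; that reason is what the spec asserts on.
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("container %s: reason=%q exitCode=%d\n",
				cs.Name, t.Reason, t.ExitCode)
		}
	}
}
------------------------------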
• [SLOW TEST:8.354 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":294,"completed":76,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:20.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 16 23:46:27.681: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:46:28.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2560" for this suite. 
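------------------------------
Note: the container spec behind this check sets terminationMessagePath plus the FallbackToLogsOnError policy, then the test reads State.Terminated.Message back (the `Expected: &{OK}` line above). A trimmed Go sketch of such a container; pod name, image, and command are placeholders:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				// The kubelet reads the message from this file after the
				// container terminates.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

With FallbackToLogsOnError the kubelet only substitutes the tail of the container log when the container fails without writing the message file; here the file is written on success, so the message ("OK") comes from the file, which is exactly what the spec asserts.
------------------------------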
• [SLOW TEST:7.629 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":77,"skipped":1469,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:28.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:46:28.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6" in namespace "projected-1472" to be "Succeeded or Failed" Aug 16 23:46:29.068: INFO: Pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6": Phase="Pending", Reason="", readiness=false. Elapsed: 95.660246ms Aug 16 23:46:31.071: INFO: Pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098528792s Aug 16 23:46:33.075: INFO: Pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6": Phase="Running", Reason="", readiness=true. Elapsed: 4.101835221s Aug 16 23:46:35.078: INFO: Pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105217245s STEP: Saw pod success Aug 16 23:46:35.078: INFO: Pod "downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6" satisfied condition "Succeeded or Failed" Aug 16 23:46:35.081: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6 container client-container: STEP: delete the pod Aug 16 23:46:35.128: INFO: Waiting for pod downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6 to disappear Aug 16 23:46:35.131: INFO: Pod downwardapi-volume-d4fe1d33-3e3c-48d1-9751-774505e01ae6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:46:35.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1472" for this suite. • [SLOW TEST:6.745 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":78,"skipped":1485,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:35.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:46:35.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config version' Aug 16 23:46:35.366: INFO: stderr: "" Aug 16 23:46:35.366: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.0\", GitCommit:\"82baa26905c94398a0d19e1b1ecf54eb8acb6029\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T20:49:22Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:46:35.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4737" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":294,"completed":79,"skipped":1486,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:46:35.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:46:35.497: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 16 23:46:40.500: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 16 23:46:40.500: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 16 23:46:42.503: INFO: Creating deployment "test-rollover-deployment" Aug 16 23:46:42.517: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 16 23:46:44.522: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 16 23:46:44.526: INFO: Ensure that both replica sets have 1 created replica Aug 16 23:46:44.531: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 16 23:46:44.536: INFO: Updating deployment test-rollover-deployment Aug 16 23:46:44.536: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 16 23:46:46.708: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 16 23:46:46.713: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 16 23:46:46.717: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:46.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218404, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, 
loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:48.725: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:48.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218404, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:50.723: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:50.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218409, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:52.723: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:52.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218409, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:54.725: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:54.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218409, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:56.801: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:56.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218409, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:46:58.802: INFO: all replica sets need to contain the pod-template-hash label Aug 16 23:46:58.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218409, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:47:01.206: INFO: Aug 16 23:47:01.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218419, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733218402, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:47:02.723: INFO: Aug 16 23:47:02.723: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 16 23:47:02.729: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7957 /apis/apps/v1/namespaces/deployment-7957/deployments/test-rollover-deployment b2152f9d-df63-4ab3-a511-32c6d68bb8d7 535631 2 2020-08-16 23:46:42 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-16 23:46:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000855648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-16 23:46:42 +0000 UTC,LastTransitionTime:2020-08-16 23:46:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6f68b9c6f9" has successfully progressed.,LastUpdateTime:2020-08-16 23:47:00 +0000 UTC,LastTransitionTime:2020-08-16 23:46:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 16 23:47:02.732: INFO: New ReplicaSet "test-rollover-deployment-6f68b9c6f9" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-6f68b9c6f9 deployment-7957 /apis/apps/v1/namespaces/deployment-7957/replicasets/test-rollover-deployment-6f68b9c6f9 48152f12-a601-4794-8be4-6b07569d3595 535617 2 2020-08-16 23:46:44 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b2152f9d-df63-4ab3-a511-32c6d68bb8d7 0xc000bd92c7 0xc000bd92c8}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:46:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2152f9d-df63-4ab3-a511-32c6d68bb8d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6f68b9c6f9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bd93d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:47:02.732: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 16 23:47:02.732: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7957 /apis/apps/v1/namespaces/deployment-7957/replicasets/test-rollover-controller d37f0ed2-2bfc-428a-a6e5-33934e65559c 535629 2 2020-08-16 23:46:35 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b2152f9d-df63-4ab3-a511-32c6d68bb8d7 0xc000bd90c7 0xc000bd90c8}] [] [{e2e.test Update apps/v1 2020-08-16 23:46:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2152f9d-df63-4ab3-a511-32c6d68bb8d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000bd9228 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:47:02.732: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-7957 /apis/apps/v1/namespaces/deployment-7957/replicasets/test-rollover-deployment-78bc8b888c cbc0d146-b178-4351-b53d-9b8e05ba7671 535566 2 2020-08-16 23:46:42 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b2152f9d-df63-4ab3-a511-32c6d68bb8d7 0xc000bd9547 0xc000bd9548}] [] 
[{kube-controller-manager Update apps/v1 2020-08-16 23:46:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2152f9d-df63-4ab3-a511-32c6d68bb8d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bd9a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:47:02.735: INFO: Pod "test-rollover-deployment-6f68b9c6f9-qh5dd" is available: &Pod{ObjectMeta:{test-rollover-deployment-6f68b9c6f9-qh5dd test-rollover-deployment-6f68b9c6f9- deployment-7957 /api/v1/namespaces/deployment-7957/pods/test-rollover-deployment-6f68b9c6f9-qh5dd 9a3193ad-1cd3-4a80-a396-f6b74a0ba75a 535587 0 2020-08-16 23:46:44 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6f68b9c6f9 48152f12-a601-4794-8be4-6b07569d3595 0xc001ec0507 0xc001ec0508}] [] [{kube-controller-manager Update v1 2020-08-16 23:46:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48152f12-a601-4794-8be4-6b07569d3595\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:46:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.237\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m7nw4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m7nw4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m7nw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},T
oleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:46:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.237,StartTime:2020-08-16 23:46:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:46:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://837432431be131356cf56a6b5dc641297070385cb9df2fab488801340de60374,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:47:02.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7957" for this suite. 
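[Editorial note] The rollover above succeeds because the Deployment controller creates a new ReplicaSet for the updated pod template (revision 2, "test-rollover-deployment-6f68b9c6f9") and, constrained by MaxSurge:1, MaxUnavailable:0 and MinReadySeconds:10 from the spec dump, only shifts replicas to it as new pods stay ready; both old ReplicaSets end with zero replicas. A minimal client-go sketch of the image update that triggers such a rollover follows; the kubeconfig path, namespace, and names are taken from this run, and the real test also renames the container (redis-slave to agnhost) while only the image swap is shown here.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this e2e run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deployments := client.AppsV1().Deployments("deployment-7957")

	// Fetch the live object, swap the container image, and write it back.
	// The Deployment controller then creates a revision-2 ReplicaSet and
	// rolls replicas over under the maxSurge/maxUnavailable constraints.
	d, err := deployments.Get(context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20"
	if _, err := deployments.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("triggered rollover of test-rollover-deployment")
}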
• [SLOW TEST:27.335 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":294,"completed":80,"skipped":1491,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:47:02.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 16 23:47:02.957: INFO: >>> kubeConfig: /root/.kube/config Aug 16 23:47:05.906: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:47:18.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5357" for this suite. 
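[Editorial note] OpenAPI publishing happens per CRD: each CustomResourceDefinition in the same group/version contributes its own kind's schema to the aggregated OpenAPI document, which is why two CRDs that differ only in kind can both show up, as this test verifies. A sketch of two such CRDs built with the apiextensions v1 Go types; the group "example.com" and the kinds Foo/Bar are made-up illustrations, not the test's generated names.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCRD builds a CRD in the shared group/version; only the kind and plural differ.
func newCRD(kind, plural string) *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Kind:   kind,
				Plural: plural,
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// Each CRD publishes its own structural schema.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}

func main() {
	for _, crd := range []*apiextv1.CustomResourceDefinition{
		newCRD("Foo", "foos"),
		newCRD("Bar", "bars"),
	} {
		fmt.Printf("%s -> kind %s in %s/v1\n", crd.Name, crd.Spec.Names.Kind, crd.Spec.Group)
	}
}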
• [SLOW TEST:15.610 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":294,"completed":81,"skipped":1492,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:47:18.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 16 23:47:18.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2527' Aug 16 23:47:22.314: INFO: stderr: "" Aug 16 23:47:22.314: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 16 23:47:22.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:22.460: INFO: stderr: "" Aug 16 23:47:22.460: INFO: stdout: "update-demo-nautilus-mqrfg update-demo-nautilus-shfl9 " Aug 16 23:47:22.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqrfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:22.601: INFO: stderr: "" Aug 16 23:47:22.601: INFO: stdout: "" Aug 16 23:47:22.601: INFO: update-demo-nautilus-mqrfg is created but not running Aug 16 23:47:27.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:27.927: INFO: stderr: "" Aug 16 23:47:27.927: INFO: stdout: "update-demo-nautilus-mqrfg update-demo-nautilus-shfl9 " Aug 16 23:47:27.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqrfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:28.010: INFO: stderr: "" Aug 16 23:47:28.010: INFO: stdout: "" Aug 16 23:47:28.010: INFO: update-demo-nautilus-mqrfg is created but not running Aug 16 23:47:33.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:33.222: INFO: stderr: "" Aug 16 23:47:33.222: INFO: stdout: "update-demo-nautilus-mqrfg update-demo-nautilus-shfl9 " Aug 16 23:47:33.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqrfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:33.311: INFO: stderr: "" Aug 16 23:47:33.311: INFO: stdout: "true" Aug 16 23:47:33.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqrfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:33.587: INFO: stderr: "" Aug 16 23:47:33.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 23:47:33.587: INFO: validating pod update-demo-nautilus-mqrfg Aug 16 23:47:33.589: INFO: got data: { "image": "nautilus.jpg" } Aug 16 23:47:33.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 23:47:33.589: INFO: update-demo-nautilus-mqrfg is verified up and running Aug 16 23:47:33.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:33.682: INFO: stderr: "" Aug 16 23:47:33.682: INFO: stdout: "true" Aug 16 23:47:33.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:33.780: INFO: stderr: "" Aug 16 23:47:33.780: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 23:47:33.780: INFO: validating pod update-demo-nautilus-shfl9 Aug 16 23:47:33.784: INFO: got data: { "image": "nautilus.jpg" } Aug 16 23:47:33.784: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 23:47:33.784: INFO: update-demo-nautilus-shfl9 is verified up and running STEP: scaling down the replication controller Aug 16 23:47:33.786: INFO: scanned /root for discovery docs: Aug 16 23:47:33.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2527' Aug 16 23:47:34.980: INFO: stderr: "" Aug 16 23:47:34.980: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 16 23:47:34.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:35.100: INFO: stderr: "" Aug 16 23:47:35.100: INFO: stdout: "update-demo-nautilus-mqrfg update-demo-nautilus-shfl9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 16 23:47:40.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:40.198: INFO: stderr: "" Aug 16 23:47:40.199: INFO: stdout: "update-demo-nautilus-shfl9 " Aug 16 23:47:40.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:40.301: INFO: stderr: "" Aug 16 23:47:40.301: INFO: stdout: "true" Aug 16 23:47:40.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:40.391: INFO: stderr: "" Aug 16 23:47:40.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 23:47:40.392: INFO: validating pod update-demo-nautilus-shfl9 Aug 16 23:47:40.394: INFO: got data: { "image": "nautilus.jpg" } Aug 16 23:47:40.394: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 16 23:47:40.394: INFO: update-demo-nautilus-shfl9 is verified up and running STEP: scaling up the replication controller Aug 16 23:47:40.395: INFO: scanned /root for discovery docs: Aug 16 23:47:40.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2527' Aug 16 23:47:41.528: INFO: stderr: "" Aug 16 23:47:41.528: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 16 23:47:41.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:41.636: INFO: stderr: "" Aug 16 23:47:41.636: INFO: stdout: "update-demo-nautilus-l9dfn update-demo-nautilus-shfl9 " Aug 16 23:47:41.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9dfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:41.775: INFO: stderr: "" Aug 16 23:47:41.775: INFO: stdout: "" Aug 16 23:47:41.775: INFO: update-demo-nautilus-l9dfn is created but not running Aug 16 23:47:46.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2527' Aug 16 23:47:46.894: INFO: stderr: "" Aug 16 23:47:46.894: INFO: stdout: "update-demo-nautilus-l9dfn update-demo-nautilus-shfl9 " Aug 16 23:47:46.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9dfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:47.012: INFO: stderr: "" Aug 16 23:47:47.012: INFO: stdout: "true" Aug 16 23:47:47.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9dfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:47.126: INFO: stderr: "" Aug 16 23:47:47.126: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 23:47:47.126: INFO: validating pod update-demo-nautilus-l9dfn Aug 16 23:47:47.130: INFO: got data: { "image": "nautilus.jpg" } Aug 16 23:47:47.130: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 23:47:47.130: INFO: update-demo-nautilus-l9dfn is verified up and running Aug 16 23:47:47.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:47.226: INFO: stderr: "" Aug 16 23:47:47.226: INFO: stdout: "true" Aug 16 23:47:47.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shfl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2527' Aug 16 23:47:47.313: INFO: stderr: "" Aug 16 23:47:47.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 23:47:47.313: INFO: validating pod update-demo-nautilus-shfl9 Aug 16 23:47:47.316: INFO: got data: { "image": "nautilus.jpg" } Aug 16 23:47:47.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 23:47:47.316: INFO: update-demo-nautilus-shfl9 is verified up and running STEP: using delete to clean up resources Aug 16 23:47:47.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2527' Aug 16 23:47:47.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 16 23:47:47.439: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 16 23:47:47.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2527' Aug 16 23:47:47.590: INFO: stderr: "No resources found in kubectl-2527 namespace.\n" Aug 16 23:47:47.590: INFO: stdout: "" Aug 16 23:47:47.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2527 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 16 23:47:47.717: INFO: stderr: "" Aug 16 23:47:47.717: INFO: stdout: "update-demo-nautilus-l9dfn\nupdate-demo-nautilus-shfl9\n" Aug 16 23:47:48.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2527' Aug 16 23:47:48.318: INFO: stderr: "No resources found in kubectl-2527 namespace.\n" Aug 16 23:47:48.318: INFO: stdout: "" Aug 16 23:47:48.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2527 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 16 23:47:48.417: INFO: stderr: "" Aug 16 23:47:48.417: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:47:48.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2527" for this suite. 
• [SLOW TEST:30.074 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":294,"completed":82,"skipped":1493,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:47:48.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:47:48.959: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 16 23:47:52.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9816 create -f -' Aug 16 23:48:10.096: INFO: stderr: "" Aug 16 23:48:10.097: INFO: stdout: "e2e-test-crd-publish-openapi-3435-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 16 23:48:10.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9816 delete e2e-test-crd-publish-openapi-3435-crds test-cr' Aug 16 23:48:10.737: INFO: stderr: "" Aug 16 23:48:10.737: INFO: stdout: "e2e-test-crd-publish-openapi-3435-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 16 23:48:10.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9816 apply -f -' Aug 16 23:48:11.987: INFO: stderr: "" Aug 16 23:48:11.987: INFO: stdout: "e2e-test-crd-publish-openapi-3435-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 16 23:48:11.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9816 delete e2e-test-crd-publish-openapi-3435-crds test-cr' Aug 16 23:48:12.207: INFO: stderr: "" Aug 16 23:48:12.207: INFO: stdout: "e2e-test-crd-publish-openapi-3435-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR 
Aug 16 23:48:12.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3435-crds' Aug 16 23:48:12.991: INFO: stderr: "" Aug 16 23:48:12.991: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3435-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:48:17.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9816" for this suite. • [SLOW TEST:28.747 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":294,"completed":83,"skipped":1505,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:48:17.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-43234520-a071-48fe-9653-5fe792b73158 in namespace container-probe-4988 Aug 16 23:48:21.817: INFO: Started pod liveness-43234520-a071-48fe-9653-5fe792b73158 in namespace container-probe-4988 STEP: checking the pod's current state and verifying that restartCount is present Aug 16 23:48:21.821: INFO: Initial restart count of pod liveness-43234520-a071-48fe-9653-5fe792b73158 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:52:23.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4988" for this suite. 
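[Editorial note] The probe test above passes because the container keeps accepting TCP connections on port 8080 for the whole observation window (roughly four minutes between pod start and teardown), so restartCount stays at 0 and the kubelet never restarts it. A sketch of a pod spec carrying such a tcp liveness probe; the image and timing values are illustrative assumptions, not the e2e test's exact settings.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// TCP liveness probe against port 8080: the kubelet restarts the
	// container only if it cannot open a TCP connection to that port.
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
	probe.TCPSocket = &corev1.TCPSocketAction{Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "server",
				Image:         "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20", // illustrative image
				LivenessProbe: probe,
			}},
		},
	}
	fmt.Printf("pod %s probes tcp/%s every %ds\n",
		pod.Name, probe.TCPSocket.Port.String(), probe.PeriodSeconds)
}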
• [SLOW TEST:246.158 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":294,"completed":84,"skipped":1509,"failed":0} S ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:52:23.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5227 Aug 16 23:52:30.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 16 23:52:30.556: INFO: stderr: "I0816 23:52:30.489636 1382 log.go:181] (0xc0009d8d10) (0xc0000ffb80) Create stream\nI0816 23:52:30.489691 1382 log.go:181] (0xc0009d8d10) (0xc0000ffb80) Stream added, broadcasting: 1\nI0816 23:52:30.492448 1382 log.go:181] (0xc0009d8d10) Reply frame received for 1\nI0816 23:52:30.492481 1382 log.go:181] (0xc0009d8d10) (0xc000510280) Create stream\nI0816 23:52:30.492490 1382 log.go:181] (0xc0009d8d10) (0xc000510280) Stream added, broadcasting: 3\nI0816 23:52:30.493523 1382 log.go:181] (0xc0009d8d10) Reply frame received for 3\nI0816 23:52:30.493547 1382 log.go:181] (0xc0009d8d10) (0xc000cca0a0) Create stream\nI0816 23:52:30.493554 1382 log.go:181] (0xc0009d8d10) (0xc000cca0a0) Stream added, broadcasting: 5\nI0816 23:52:30.494542 1382 log.go:181] (0xc0009d8d10) Reply frame received for 5\nI0816 23:52:30.543522 1382 log.go:181] (0xc0009d8d10) Data frame received for 5\nI0816 23:52:30.543547 1382 log.go:181] (0xc000cca0a0) (5) Data frame handling\nI0816 23:52:30.543564 1382 log.go:181] (0xc000cca0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0816 23:52:30.546381 1382 log.go:181] (0xc0009d8d10) Data frame received for 3\nI0816 23:52:30.546401 1382 log.go:181] (0xc000510280) (3) Data frame handling\nI0816 23:52:30.546416 1382 log.go:181] (0xc000510280) (3) Data frame sent\nI0816 23:52:30.547166 1382 
log.go:181] (0xc0009d8d10) Data frame received for 5\nI0816 23:52:30.547183 1382 log.go:181] (0xc000cca0a0) (5) Data frame handling\nI0816 23:52:30.547198 1382 log.go:181] (0xc0009d8d10) Data frame received for 3\nI0816 23:52:30.547215 1382 log.go:181] (0xc000510280) (3) Data frame handling\nI0816 23:52:30.549065 1382 log.go:181] (0xc0009d8d10) Data frame received for 1\nI0816 23:52:30.549081 1382 log.go:181] (0xc0000ffb80) (1) Data frame handling\nI0816 23:52:30.549093 1382 log.go:181] (0xc0000ffb80) (1) Data frame sent\nI0816 23:52:30.549123 1382 log.go:181] (0xc0009d8d10) (0xc0000ffb80) Stream removed, broadcasting: 1\nI0816 23:52:30.549416 1382 log.go:181] (0xc0009d8d10) (0xc0000ffb80) Stream removed, broadcasting: 1\nI0816 23:52:30.549457 1382 log.go:181] (0xc0009d8d10) Go away received\nI0816 23:52:30.549486 1382 log.go:181] (0xc0009d8d10) (0xc000510280) Stream removed, broadcasting: 3\nI0816 23:52:30.549507 1382 log.go:181] (0xc0009d8d10) (0xc000cca0a0) Stream removed, broadcasting: 5\n" Aug 16 23:52:30.556: INFO: stdout: "iptables" Aug 16 23:52:30.556: INFO: proxyMode: iptables Aug 16 23:52:30.561: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:30.690: INFO: Pod kube-proxy-mode-detector still exists Aug 16 23:52:32.690: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:32.693: INFO: Pod kube-proxy-mode-detector still exists Aug 16 23:52:34.690: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:34.694: INFO: Pod kube-proxy-mode-detector still exists Aug 16 23:52:36.690: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:36.694: INFO: Pod kube-proxy-mode-detector still exists Aug 16 23:52:38.690: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:38.692: INFO: Pod kube-proxy-mode-detector still exists Aug 16 23:52:40.690: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 16 23:52:40.749: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5227 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5227 I0816 23:52:41.402700 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5227, replica count: 3 I0816 23:52:44.453050 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:52:47.453283 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:52:50.453486 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 23:52:53.453745 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 23:52:53.518: INFO: Creating new exec pod Aug 16 23:52:58.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 execpod-affinityhbtsr -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 16 23:52:58.785: INFO: stderr: "I0816 23:52:58.715828 1399 log.go:181] (0xc00043ed10) (0xc000ad4500) Create stream\nI0816 23:52:58.715873 1399 log.go:181] (0xc00043ed10) (0xc000ad4500) 
Stream added, broadcasting: 1\nI0816 23:52:58.719124 1399 log.go:181] (0xc00043ed10) Reply frame received for 1\nI0816 23:52:58.719160 1399 log.go:181] (0xc00043ed10) (0xc0007a2aa0) Create stream\nI0816 23:52:58.719169 1399 log.go:181] (0xc00043ed10) (0xc0007a2aa0) Stream added, broadcasting: 3\nI0816 23:52:58.720234 1399 log.go:181] (0xc00043ed10) Reply frame received for 3\nI0816 23:52:58.720268 1399 log.go:181] (0xc00043ed10) (0xc0004de5a0) Create stream\nI0816 23:52:58.720286 1399 log.go:181] (0xc00043ed10) (0xc0004de5a0) Stream added, broadcasting: 5\nI0816 23:52:58.721009 1399 log.go:181] (0xc00043ed10) Reply frame received for 5\nI0816 23:52:58.777499 1399 log.go:181] (0xc00043ed10) Data frame received for 3\nI0816 23:52:58.777531 1399 log.go:181] (0xc0007a2aa0) (3) Data frame handling\nI0816 23:52:58.777556 1399 log.go:181] (0xc00043ed10) Data frame received for 5\nI0816 23:52:58.777566 1399 log.go:181] (0xc0004de5a0) (5) Data frame handling\nI0816 23:52:58.777576 1399 log.go:181] (0xc0004de5a0) (5) Data frame sent\nI0816 23:52:58.777584 1399 log.go:181] (0xc00043ed10) Data frame received for 5\nI0816 23:52:58.777590 1399 log.go:181] (0xc0004de5a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0816 23:52:58.779364 1399 log.go:181] (0xc00043ed10) Data frame received for 1\nI0816 23:52:58.779386 1399 log.go:181] (0xc000ad4500) (1) Data frame handling\nI0816 23:52:58.779392 1399 log.go:181] (0xc000ad4500) (1) Data frame sent\nI0816 23:52:58.779401 1399 log.go:181] (0xc00043ed10) (0xc000ad4500) Stream removed, broadcasting: 1\nI0816 23:52:58.779414 1399 log.go:181] (0xc00043ed10) Go away received\nI0816 23:52:58.779769 1399 log.go:181] (0xc00043ed10) (0xc000ad4500) Stream removed, broadcasting: 1\nI0816 23:52:58.779789 1399 log.go:181] (0xc00043ed10) (0xc0007a2aa0) Stream removed, broadcasting: 3\nI0816 23:52:58.779797 1399 log.go:181] (0xc00043ed10) (0xc0004de5a0) Stream removed, broadcasting: 5\n" Aug 16 23:52:58.785: INFO: stdout: "" Aug 16 23:52:58.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 execpod-affinityhbtsr -- /bin/sh -x -c nc -zv -t -w 2 10.102.35.157 80' Aug 16 23:52:58.976: INFO: stderr: "I0816 23:52:58.917179 1417 log.go:181] (0xc000daef20) (0xc000d86460) Create stream\nI0816 23:52:58.917247 1417 log.go:181] (0xc000daef20) (0xc000d86460) Stream added, broadcasting: 1\nI0816 23:52:58.922922 1417 log.go:181] (0xc000daef20) Reply frame received for 1\nI0816 23:52:58.922957 1417 log.go:181] (0xc000daef20) (0xc000aca500) Create stream\nI0816 23:52:58.922973 1417 log.go:181] (0xc000daef20) (0xc000aca500) Stream added, broadcasting: 3\nI0816 23:52:58.923704 1417 log.go:181] (0xc000daef20) Reply frame received for 3\nI0816 23:52:58.923743 1417 log.go:181] (0xc000daef20) (0xc0006d8be0) Create stream\nI0816 23:52:58.923762 1417 log.go:181] (0xc000daef20) (0xc0006d8be0) Stream added, broadcasting: 5\nI0816 23:52:58.924523 1417 log.go:181] (0xc000daef20) Reply frame received for 5\nI0816 23:52:58.967871 1417 log.go:181] (0xc000daef20) Data frame received for 5\nI0816 23:52:58.967891 1417 log.go:181] (0xc0006d8be0) (5) Data frame handling\nI0816 23:52:58.967912 1417 log.go:181] (0xc0006d8be0) (5) Data frame sent\nI0816 23:52:58.967922 1417 log.go:181] (0xc000daef20) Data frame received for 5\nI0816 23:52:58.967926 1417 log.go:181] (0xc0006d8be0) (5) Data frame handling\n+ nc -zv -t -w 2 
10.102.35.157 80\nConnection to 10.102.35.157 80 port [tcp/http] succeeded!\nI0816 23:52:58.967944 1417 log.go:181] (0xc000daef20) Data frame received for 3\nI0816 23:52:58.967968 1417 log.go:181] (0xc000aca500) (3) Data frame handling\nI0816 23:52:58.969550 1417 log.go:181] (0xc000daef20) Data frame received for 1\nI0816 23:52:58.969628 1417 log.go:181] (0xc000d86460) (1) Data frame handling\nI0816 23:52:58.969647 1417 log.go:181] (0xc000d86460) (1) Data frame sent\nI0816 23:52:58.969666 1417 log.go:181] (0xc000daef20) (0xc000d86460) Stream removed, broadcasting: 1\nI0816 23:52:58.969684 1417 log.go:181] (0xc000daef20) Go away received\nI0816 23:52:58.969967 1417 log.go:181] (0xc000daef20) (0xc000d86460) Stream removed, broadcasting: 1\nI0816 23:52:58.969981 1417 log.go:181] (0xc000daef20) (0xc000aca500) Stream removed, broadcasting: 3\nI0816 23:52:58.969993 1417 log.go:181] (0xc000daef20) (0xc0006d8be0) Stream removed, broadcasting: 5\n" Aug 16 23:52:58.976: INFO: stdout: "" Aug 16 23:52:58.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 execpod-affinityhbtsr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.35.157:80/ ; done' Aug 16 23:52:59.285: INFO: stderr: "I0816 23:52:59.115436 1435 log.go:181] (0xc00063e0b0) (0xc000795860) Create stream\nI0816 23:52:59.115502 1435 log.go:181] (0xc00063e0b0) (0xc000795860) Stream added, broadcasting: 1\nI0816 23:52:59.117147 1435 log.go:181] (0xc00063e0b0) Reply frame received for 1\nI0816 23:52:59.117202 1435 log.go:181] (0xc00063e0b0) (0xc000725360) Create stream\nI0816 23:52:59.117216 1435 log.go:181] (0xc00063e0b0) (0xc000725360) Stream added, broadcasting: 3\nI0816 23:52:59.118252 1435 log.go:181] (0xc00063e0b0) Reply frame received for 3\nI0816 23:52:59.118288 1435 log.go:181] (0xc00063e0b0) (0xc000834be0) Create stream\nI0816 23:52:59.118306 1435 log.go:181] (0xc00063e0b0) (0xc000834be0) Stream added, broadcasting: 5\nI0816 23:52:59.119144 1435 log.go:181] (0xc00063e0b0) Reply frame received for 5\nI0816 23:52:59.189537 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.189593 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.189615 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.189655 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.189672 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.189711 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.195189 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.195212 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.195231 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.196089 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.196111 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.196120 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.196129 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.196146 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.196155 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.201242 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.201259 1435 
log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.201291 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.201626 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.201657 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0816 23:52:59.201672 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.201686 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.201697 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.201714 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.201726 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.201736 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.201750 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n http://10.102.35.157:80/\nI0816 23:52:59.207350 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.207376 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.207403 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.207680 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.207696 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.207708 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.207722 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.207735 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.207752 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.207765 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.207774 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.207788 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.212408 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.212424 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.212439 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.213345 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.213377 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.213386 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.213392 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.213420 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.213465 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.213490 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.213510 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.213547 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.218120 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.218148 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.218172 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.218623 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.218638 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.218645 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.218652 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.218687 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.218706 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.218715 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.218723 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.218729 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.225216 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.225247 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.225275 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.225639 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.225647 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.225652 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.225656 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.225660 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.225698 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.225739 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.225759 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.225780 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.231558 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.231571 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.231577 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.232153 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.232173 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.232186 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.232202 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.232211 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.232229 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.233001 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.233017 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.233030 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.236079 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.236089 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.236095 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.236469 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.236486 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.236496 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.236510 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.236520 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.236532 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.240930 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.240961 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.240992 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.241256 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.241277 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 
23:52:59.241297 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.241309 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.241320 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.241337 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.245768 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.245781 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.245794 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.246315 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.246336 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.246346 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.246357 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.246363 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.246369 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.250953 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.250979 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.251002 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.251418 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.251440 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.251457 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.251466 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.251476 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.251493 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.251502 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.251508 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.251527 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.254379 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.254396 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.254407 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.255218 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.255255 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.255272 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.255290 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.255302 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.255331 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.261187 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.261199 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.261211 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.261737 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.261762 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.261789 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.261801 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 
23:52:59.261817 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.261828 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.265589 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.265601 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.265609 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.266002 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.266017 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.266032 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.266043 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0816 23:52:59.266056 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.266072 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.266088 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.266112 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.266134 1435 log.go:181] (0xc000834be0) (5) Data frame sent\n 2 http://10.102.35.157:80/\nI0816 23:52:59.270942 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.270963 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.270979 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.271516 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.271536 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.271545 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.271563 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.271579 1435 log.go:181] (0xc000834be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.271589 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.271627 1435 log.go:181] (0xc000834be0) (5) Data frame sent\nI0816 23:52:59.271661 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.271699 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.275594 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.275611 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.275622 1435 log.go:181] (0xc000725360) (3) Data frame sent\nI0816 23:52:59.276540 1435 log.go:181] (0xc00063e0b0) Data frame received for 5\nI0816 23:52:59.276558 1435 log.go:181] (0xc000834be0) (5) Data frame handling\nI0816 23:52:59.276589 1435 log.go:181] (0xc00063e0b0) Data frame received for 3\nI0816 23:52:59.276616 1435 log.go:181] (0xc000725360) (3) Data frame handling\nI0816 23:52:59.278210 1435 log.go:181] (0xc00063e0b0) Data frame received for 1\nI0816 23:52:59.278237 1435 log.go:181] (0xc000795860) (1) Data frame handling\nI0816 23:52:59.278258 1435 log.go:181] (0xc000795860) (1) Data frame sent\nI0816 23:52:59.278420 1435 log.go:181] (0xc00063e0b0) (0xc000795860) Stream removed, broadcasting: 1\nI0816 23:52:59.278452 1435 log.go:181] (0xc00063e0b0) Go away received\nI0816 23:52:59.278727 1435 log.go:181] (0xc00063e0b0) (0xc000795860) Stream removed, broadcasting: 1\nI0816 23:52:59.278748 1435 log.go:181] (0xc00063e0b0) (0xc000725360) Stream removed, broadcasting: 3\nI0816 23:52:59.278756 1435 log.go:181] (0xc00063e0b0) (0xc000834be0) Stream removed, broadcasting: 5\n" Aug 16 23:52:59.285: INFO: stdout: 
"\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs\naffinity-clusterip-timeout-7r7rs" Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.285: INFO: Received response from host: affinity-clusterip-timeout-7r7rs Aug 16 23:52:59.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 execpod-affinityhbtsr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.35.157:80/' Aug 16 23:52:59.480: INFO: stderr: "I0816 23:52:59.396248 1453 log.go:181] (0xc000d978c0) (0xc000a09680) Create stream\nI0816 23:52:59.396298 1453 log.go:181] (0xc000d978c0) (0xc000a09680) Stream added, broadcasting: 1\nI0816 23:52:59.399710 1453 log.go:181] (0xc000d978c0) Reply frame received for 1\nI0816 23:52:59.399791 1453 log.go:181] (0xc000d978c0) (0xc00068ca00) Create stream\nI0816 23:52:59.399849 1453 log.go:181] (0xc000d978c0) (0xc00068ca00) Stream added, broadcasting: 3\nI0816 23:52:59.402148 1453 log.go:181] (0xc000d978c0) Reply frame received for 3\nI0816 23:52:59.402175 1453 log.go:181] (0xc000d978c0) (0xc0006161e0) Create stream\nI0816 23:52:59.402184 1453 log.go:181] (0xc000d978c0) (0xc0006161e0) Stream added, broadcasting: 5\nI0816 23:52:59.403160 1453 log.go:181] (0xc000d978c0) Reply frame received for 5\nI0816 23:52:59.468799 1453 log.go:181] (0xc000d978c0) Data frame received for 5\nI0816 23:52:59.468821 1453 log.go:181] (0xc0006161e0) (5) Data frame handling\nI0816 23:52:59.468828 1453 log.go:181] (0xc0006161e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:52:59.472629 1453 log.go:181] (0xc000d978c0) Data frame received for 3\nI0816 23:52:59.472640 1453 log.go:181] (0xc00068ca00) (3) Data frame handling\nI0816 23:52:59.472649 1453 log.go:181] 
(0xc00068ca00) (3) Data frame sent\nI0816 23:52:59.473214 1453 log.go:181] (0xc000d978c0) Data frame received for 5\nI0816 23:52:59.473256 1453 log.go:181] (0xc0006161e0) (5) Data frame handling\nI0816 23:52:59.473283 1453 log.go:181] (0xc000d978c0) Data frame received for 3\nI0816 23:52:59.473302 1453 log.go:181] (0xc00068ca00) (3) Data frame handling\nI0816 23:52:59.474204 1453 log.go:181] (0xc000d978c0) Data frame received for 1\nI0816 23:52:59.474231 1453 log.go:181] (0xc000a09680) (1) Data frame handling\nI0816 23:52:59.474263 1453 log.go:181] (0xc000a09680) (1) Data frame sent\nI0816 23:52:59.474282 1453 log.go:181] (0xc000d978c0) (0xc000a09680) Stream removed, broadcasting: 1\nI0816 23:52:59.474300 1453 log.go:181] (0xc000d978c0) Go away received\nI0816 23:52:59.474638 1453 log.go:181] (0xc000d978c0) (0xc000a09680) Stream removed, broadcasting: 1\nI0816 23:52:59.474655 1453 log.go:181] (0xc000d978c0) (0xc00068ca00) Stream removed, broadcasting: 3\nI0816 23:52:59.474663 1453 log.go:181] (0xc000d978c0) (0xc0006161e0) Stream removed, broadcasting: 5\n" Aug 16 23:52:59.480: INFO: stdout: "affinity-clusterip-timeout-7r7rs" Aug 16 23:53:14.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5227 execpod-affinityhbtsr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.35.157:80/' Aug 16 23:53:14.700: INFO: stderr: "I0816 23:53:14.619179 1471 log.go:181] (0xc00063f290) (0xc000ce17c0) Create stream\nI0816 23:53:14.619228 1471 log.go:181] (0xc00063f290) (0xc000ce17c0) Stream added, broadcasting: 1\nI0816 23:53:14.622970 1471 log.go:181] (0xc00063f290) Reply frame received for 1\nI0816 23:53:14.623010 1471 log.go:181] (0xc00063f290) (0xc0009fa6e0) Create stream\nI0816 23:53:14.623021 1471 log.go:181] (0xc00063f290) (0xc0009fa6e0) Stream added, broadcasting: 3\nI0816 23:53:14.623834 1471 log.go:181] (0xc00063f290) Reply frame received for 3\nI0816 23:53:14.623867 1471 log.go:181] (0xc00063f290) (0xc0009c0140) Create stream\nI0816 23:53:14.623878 1471 log.go:181] (0xc00063f290) (0xc0009c0140) Stream added, broadcasting: 5\nI0816 23:53:14.624685 1471 log.go:181] (0xc00063f290) Reply frame received for 5\nI0816 23:53:14.689715 1471 log.go:181] (0xc00063f290) Data frame received for 5\nI0816 23:53:14.689736 1471 log.go:181] (0xc0009c0140) (5) Data frame handling\nI0816 23:53:14.689748 1471 log.go:181] (0xc0009c0140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.35.157:80/\nI0816 23:53:14.693953 1471 log.go:181] (0xc00063f290) Data frame received for 3\nI0816 23:53:14.693986 1471 log.go:181] (0xc0009fa6e0) (3) Data frame handling\nI0816 23:53:14.694009 1471 log.go:181] (0xc0009fa6e0) (3) Data frame sent\nI0816 23:53:14.694208 1471 log.go:181] (0xc00063f290) Data frame received for 3\nI0816 23:53:14.694226 1471 log.go:181] (0xc0009fa6e0) (3) Data frame handling\nI0816 23:53:14.694251 1471 log.go:181] (0xc00063f290) Data frame received for 5\nI0816 23:53:14.694263 1471 log.go:181] (0xc0009c0140) (5) Data frame handling\nI0816 23:53:14.695599 1471 log.go:181] (0xc00063f290) Data frame received for 1\nI0816 23:53:14.695617 1471 log.go:181] (0xc000ce17c0) (1) Data frame handling\nI0816 23:53:14.695637 1471 log.go:181] (0xc000ce17c0) (1) Data frame sent\nI0816 23:53:14.695655 1471 log.go:181] (0xc00063f290) (0xc000ce17c0) Stream removed, broadcasting: 1\nI0816 23:53:14.695815 1471 log.go:181] (0xc00063f290) Go away received\nI0816 23:53:14.696064 1471 log.go:181] (0xc00063f290) 
(0xc000ce17c0) Stream removed, broadcasting: 1\nI0816 23:53:14.696081 1471 log.go:181] (0xc00063f290) (0xc0009fa6e0) Stream removed, broadcasting: 3\nI0816 23:53:14.696089 1471 log.go:181] (0xc00063f290) (0xc0009c0140) Stream removed, broadcasting: 5\n" Aug 16 23:53:14.701: INFO: stdout: "affinity-clusterip-timeout-ngjcp" Aug 16 23:53:14.701: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5227, will wait for the garbage collector to delete the pods Aug 16 23:53:14.822: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.313987ms Aug 16 23:53:15.322: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.204428ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:53:29.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5227" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:66.427 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":85,"skipped":1510,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:53:29.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e358e8b9-020b-468f-a99e-41cd4883b5b0 STEP: Creating secret with name s-test-opt-upd-f18f02d6-ee9f-47a1-9dba-8873d4e84d4a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e358e8b9-020b-468f-a99e-41cd4883b5b0 STEP: Updating secret s-test-opt-upd-f18f02d6-ee9f-47a1-9dba-8873d4e84d4a STEP: Creating secret with name s-test-opt-create-a531edc7-81cd-48cd-a75e-61f2393be608 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:54:55.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1395" for this suite. • [SLOW TEST:85.714 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":86,"skipped":1517,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:54:55.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-762.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-762.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-762.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-762.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-762.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-762.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 23:55:05.633: INFO: DNS probes using dns-762/dns-test-4e79fc63-6bbb-4dfa-90d6-ef794ec79843 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:05.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-762" for this suite. • [SLOW TEST:10.478 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":294,"completed":87,"skipped":1535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:05.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:55:06.686: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7" in namespace "security-context-test-8883" to be "Succeeded or Failed" Aug 16 23:55:06.919: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7": Phase="Pending", Reason="", readiness=false. Elapsed: 233.023542ms Aug 16 23:55:08.966: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280308465s Aug 16 23:55:13.489: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.80252649s Aug 16 23:55:15.683: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7": Phase="Running", Reason="", readiness=true. Elapsed: 8.996588601s Aug 16 23:55:17.686: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.000300526s Aug 16 23:55:17.686: INFO: Pod "alpine-nnp-false-cd7fee66-1132-48b6-86f1-c2beea8907a7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:17.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8883" for this suite. • [SLOW TEST:11.878 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":88,"skipped":1569,"failed":0} [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:17.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-5242671c-4715-46e2-97f0-8c71e89ef255 STEP: Creating configMap with name cm-test-opt-upd-78725e21-91cc-403c-bdbf-68ac0a89993f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5242671c-4715-46e2-97f0-8c71e89ef255 STEP: Updating configmap cm-test-opt-upd-78725e21-91cc-403c-bdbf-68ac0a89993f STEP: Creating configMap with name cm-test-opt-create-4c95d63c-7d6d-4c50-a119-b29c4905b2c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:32.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9403" for this suite. 
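For reference, what makes the Projected configMap spec above pass is the optional flag on each projected source: the kubelet mounts the volume even while the deleted configMap is absent, and rewrites the files when the updated one changes. A minimal Go sketch of such a pod spec using the core/v1 types (the names, image, and command below are illustrative placeholders, not the generated values in this run):

// Sketch, not the test's exact code: a pod whose projected volume draws from
// two optional configMaps, so deletion of one is tolerated at mount time.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	cmSource := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional, // tolerate a missing configMap
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							cmSource("cm-test-opt-del"), // deleted mid-test; pod keeps running
							cmSource("cm-test-opt-upd"), // updated mid-test; mounted file changes
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative; the suite uses its own test images
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volumes",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}

The same shape with Secret/SecretProjection in place of ConfigMap/ConfigMapProjection is what the earlier Projected secret optional-updates spec exercises.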
• [SLOW TEST:15.095 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":89,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:32.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:55:34.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874" in namespace "projected-9492" to be "Succeeded or Failed" Aug 16 23:55:34.087: INFO: Pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874": Phase="Pending", Reason="", readiness=false. Elapsed: 74.612984ms Aug 16 23:55:36.237: INFO: Pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224488364s Aug 16 23:55:38.241: INFO: Pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228324962s Aug 16 23:55:40.245: INFO: Pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.232100776s STEP: Saw pod success Aug 16 23:55:40.245: INFO: Pod "downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874" satisfied condition "Succeeded or Failed" Aug 16 23:55:40.247: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874 container client-container: STEP: delete the pod Aug 16 23:55:40.563: INFO: Waiting for pod downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874 to disappear Aug 16 23:55:40.646: INFO: Pod downwardapi-volume-5877c5ef-db22-4b57-a330-464688ee5874 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:40.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9492" for this suite. • [SLOW TEST:7.799 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":90,"skipped":1601,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:40.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-aa983f76-29fe-45a3-99aa-862b207bcf12 STEP: Creating a pod to test consume secrets Aug 16 23:55:41.222: INFO: Waiting up to 5m0s for pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807" in namespace "secrets-2757" to be "Succeeded or Failed" Aug 16 23:55:41.233: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807": Phase="Pending", Reason="", readiness=false. Elapsed: 10.252652ms Aug 16 23:55:43.436: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21379452s Aug 16 23:55:45.440: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.217261905s Aug 16 23:55:47.598: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807": Phase="Running", Reason="", readiness=true. Elapsed: 6.376093212s Aug 16 23:55:49.602: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.3793242s STEP: Saw pod success Aug 16 23:55:49.602: INFO: Pod "pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807" satisfied condition "Succeeded or Failed" Aug 16 23:55:49.603: INFO: Trying to get logs from node latest-worker pod pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807 container secret-volume-test: STEP: delete the pod Aug 16 23:55:49.666: INFO: Waiting for pod pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807 to disappear Aug 16 23:55:49.712: INFO: Pod pod-secrets-c9edb9f2-05eb-4a25-8f5f-de291bfe0807 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:49.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2757" for this suite. STEP: Destroying namespace "secret-namespace-2329" for this suite. • [SLOW TEST:9.130 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":294,"completed":91,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:49.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 16 23:55:50.373: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9743 /api/v1/namespaces/watch-9743/configmaps/e2e-watch-test-resource-version 5d5b5500-11b9-4818-a822-5b90d93ce3c8 537628 0 2020-08-16 23:55:50 
+0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-16 23:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 16 23:55:50.373: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9743 /api/v1/namespaces/watch-9743/configmaps/e2e-watch-test-resource-version 5d5b5500-11b9-4818-a822-5b90d93ce3c8 537629 0 2020-08-16 23:55:50 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-16 23:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:55:50.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9743" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":294,"completed":92,"skipped":1625,"failed":0} SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:55:50.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:55:50.500: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:56:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6580" for this suite. 
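For reference, the websocket case above reads the same /api/v1/namespaces/<ns>/pods/<name>/log endpoint that kubectl logs uses; only the transport differs, and the returned byte stream is identical. Below is a minimal client-go sketch of the conventional HTTP streaming path, not the websocket dial the framework performs (kubeconfig location, namespace, and pod name are assumptions, not values from this run):

package main

import (
	"context"
	"io"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the e2e run above uses /root/.kube/config.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Follow the container log; the e2e test performs the equivalent read over
	// a websocket upgrade of the same endpoint.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc)
}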
• [SLOW TEST:10.268 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":294,"completed":93,"skipped":1628,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:56:00.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 16 23:56:15.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 23:56:15.444: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 23:56:17.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 23:56:17.454: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 23:56:19.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 23:56:19.449: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 23:56:21.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 23:56:21.461: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:56:21.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2487" for this suite. 
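For context on the waits above: a postStart exec handler runs immediately after the container starts, the container is not reported Running until the handler returns, and a failing handler subjects the container to its restart policy. Here the handler reaches the HTTPGet helper pod created in BeforeEach, which is how "check poststart hook" verifies execution. A sketch of the relevant spec fragment (image and handler command are illustrative; recent client-go names the type LifecycleHandler, older releases call it Handler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox", // illustrative; the suite uses its own test images
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs right after the container starts; Running is reported
					// only once the handler returns, and a non-zero exit kills
					// the container per its restart policy.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart ran > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}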
• [SLOW TEST:20.811 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":294,"completed":94,"skipped":1640,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:56:21.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:56:21.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2" in namespace "downward-api-3557" to be "Succeeded or Failed" Aug 16 23:56:21.831: INFO: Pod "downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.563369ms Aug 16 23:56:23.914: INFO: Pod "downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110268057s Aug 16 23:56:25.920: INFO: Pod "downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116272637s STEP: Saw pod success Aug 16 23:56:25.921: INFO: Pod "downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2" satisfied condition "Succeeded or Failed" Aug 16 23:56:25.923: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2 container client-container: STEP: delete the pod Aug 16 23:56:26.018: INFO: Waiting for pod downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2 to disappear Aug 16 23:56:26.232: INFO: Pod downwardapi-volume-42fd5f8d-1308-4845-aa1b-f48fe6e14dd2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:56:26.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3557" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":95,"skipped":1643,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:56:26.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 16 23:56:26.377: INFO: Waiting up to 5m0s for pod "var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95" in namespace "var-expansion-6471" to be "Succeeded or Failed" Aug 16 23:56:26.413: INFO: Pod "var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95": Phase="Pending", Reason="", readiness=false. Elapsed: 35.461175ms Aug 16 23:56:28.420: INFO: Pod "var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042155896s Aug 16 23:56:30.423: INFO: Pod "var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045892537s STEP: Saw pod success Aug 16 23:56:30.423: INFO: Pod "var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95" satisfied condition "Succeeded or Failed" Aug 16 23:56:30.426: INFO: Trying to get logs from node latest-worker pod var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95 container dapi-container: STEP: delete the pod Aug 16 23:56:30.471: INFO: Waiting for pod var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95 to disappear Aug 16 23:56:30.474: INFO: Pod var-expansion-9cc2484c-5572-45f8-a031-8344829b0c95 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:56:30.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6471" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":294,"completed":96,"skipped":1662,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:56:30.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 16 23:56:30.552: INFO: Waiting up to 5m0s for pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885" in namespace "emptydir-3163" to be "Succeeded or Failed" Aug 16 23:56:30.563: INFO: Pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885": Phase="Pending", Reason="", readiness=false. Elapsed: 11.14329ms Aug 16 23:56:32.616: INFO: Pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064316363s Aug 16 23:56:34.981: INFO: Pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885": Phase="Running", Reason="", readiness=true. Elapsed: 4.429458211s Aug 16 23:56:37.192: INFO: Pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.639941044s STEP: Saw pod success Aug 16 23:56:37.192: INFO: Pod "pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885" satisfied condition "Succeeded or Failed" Aug 16 23:56:37.195: INFO: Trying to get logs from node latest-worker pod pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885 container test-container: STEP: delete the pod Aug 16 23:56:37.504: INFO: Waiting for pod pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885 to disappear Aug 16 23:56:37.514: INFO: Pod pod-7a746d28-d6a2-488e-91d2-8e6cfc1cb885 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:56:37.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3163" for this suite. • [SLOW TEST:7.047 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":97,"skipped":1662,"failed":0} [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:56:37.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-fd64884d-0c4b-41b7-9bbd-03362a4642b9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-fd64884d-0c4b-41b7-9bbd-03362a4642b9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:57:57.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8377" for this suite. 
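------------------------------
The projected-ConfigMap steps above (create the ConfigMap, mount it through a projected volume, update it, then "waiting to observe update in volume") can be reproduced outside the e2e framework. A minimal client-go sketch follows, assuming a cluster reachable through the same kubeconfig path and a client-go release whose CRUD calls take a context (v0.18+); the names "demo-cm"/"cfg-watcher" and the busybox image are illustrative, not taken from the suite:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A pod that mounts a ConfigMap through a projected volume; the kubelet
	// re-syncs the projected files when the ConfigMap object changes, which
	// is exactly what the "waiting to observe update in volume" step checks.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cfg-watcher"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cfg/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}

------------------------------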
• [SLOW TEST:79.996 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":98,"skipped":1662,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:57:57.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 16 23:57:57.734: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:10.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-980" for this suite. 
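------------------------------
The verification pattern above ("setting up watch", then "verifying pod creation was observed" and "verifying pod deletion was observed") maps onto a small client-go program: open a watch on the pod before submitting it, then consume events until the Deleted event arrives. A sketch; the namespace and pod name are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Watch a single pod by name, as the test does before submitting it.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove", // illustrative name
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	// ADDED fires when the pod is submitted, DELETED when it is removed.
	for ev := range w.ResultChan() {
		fmt.Printf("observed %s\n", ev.Type)
		if ev.Type == watch.Deleted {
			return // deletion observed, the condition the test asserts on
		}
	}
}

------------------------------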
• [SLOW TEST:12.874 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":294,"completed":99,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:10.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 16 23:58:19.047: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:19.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7260" for this suite. 
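------------------------------
The FallbackToLogsOnError check above can be reproduced with an ordinary pod spec: when the container exits non-zero and writes nothing to /dev/termination-log, the kubelet falls back to the tail of the container log ("DONE" in the run above) as the termination message. A sketch that only constructs and prints the object; the pod name, image, and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// The container fails without writing /dev/termination-log,
				// so the kubelet uses the log tail ("DONE") as the message.
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

------------------------------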
• [SLOW TEST:8.827 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":100,"skipped":1700,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:19.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:58:19.662: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:20.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1074" for this suite. 
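------------------------------
The status-subresource test above exercises the .../status endpoint of a custom resource, which only exists when the CRD version sets subresources.status. A sketch of such a CRD using the apiextensions/v1 Go types; the group and kind are illustrative, and the schema is the minimal structural one (type: object with x-kubernetes-preserve-unknown-fields):

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// Enabling the status subresource is what makes
				// GET/PUT/PATCH on .../widgets/<name>/status work.
				Subresources: &apiextv1.CustomResourceSubresources{
					Status: &apiextv1.CustomResourceSubresourceStatus{},
				},
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}

------------------------------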
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":294,"completed":101,"skipped":1709,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:20.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 16 23:58:21.202: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:37.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-517" for this suite. • [SLOW TEST:17.463 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":294,"completed":102,"skipped":1721,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:38.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:53.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3331" for this suite. • [SLOW TEST:14.856 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":294,"completed":103,"skipped":1726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:53.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:58:53.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e" in namespace "downward-api-3144" to be "Succeeded or Failed" Aug 16 23:58:54.031: INFO: Pod "downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e": Phase="Pending", Reason="", readiness=false. Elapsed: 121.062239ms Aug 16 23:58:56.035: INFO: Pod "downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125052155s Aug 16 23:58:58.055: INFO: Pod "downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.145198907s STEP: Saw pod success Aug 16 23:58:58.055: INFO: Pod "downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e" satisfied condition "Succeeded or Failed" Aug 16 23:58:58.058: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e container client-container: STEP: delete the pod Aug 16 23:58:58.089: INFO: Waiting for pod downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e to disappear Aug 16 23:58:58.093: INFO: Pod downwardapi-volume-cb080bcb-1330-4d83-9711-d0329312916e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:58:58.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3144" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":104,"skipped":1762,"failed":0} S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:58:58.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 16 23:58:58.642: INFO: Waiting up to 5m0s for pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2" in namespace "downward-api-4283" to be "Succeeded or Failed" Aug 16 23:58:58.784: INFO: Pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2": Phase="Pending", Reason="", readiness=false. Elapsed: 142.086465ms Aug 16 23:59:00.788: INFO: Pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14615657s Aug 16 23:59:02.792: INFO: Pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150422139s Aug 16 23:59:05.163: INFO: Pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.521542338s STEP: Saw pod success Aug 16 23:59:05.163: INFO: Pod "downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2" satisfied condition "Succeeded or Failed" Aug 16 23:59:05.208: INFO: Trying to get logs from node latest-worker2 pod downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2 container dapi-container: STEP: delete the pod Aug 16 23:59:05.602: INFO: Waiting for pod downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2 to disappear Aug 16 23:59:05.962: INFO: Pod downward-api-672d5986-ff0b-4e2b-9ae8-1a11bf7838b2 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:05.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4283" for this suite. • [SLOW TEST:7.871 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":294,"completed":105,"skipped":1763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:05.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 16 23:59:06.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa" in namespace "downward-api-668" to be "Succeeded or Failed" Aug 16 23:59:06.570: INFO: Pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa": Phase="Pending", Reason="", readiness=false. Elapsed: 340.637869ms Aug 16 23:59:08.574: INFO: Pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344526317s Aug 16 23:59:10.579: INFO: Pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.348882335s Aug 16 23:59:12.583: INFO: Pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.353007828s STEP: Saw pod success Aug 16 23:59:12.583: INFO: Pod "downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa" satisfied condition "Succeeded or Failed" Aug 16 23:59:12.585: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa container client-container: STEP: delete the pod Aug 16 23:59:12.687: INFO: Waiting for pod downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa to disappear Aug 16 23:59:12.692: INFO: Pod downwardapi-volume-db3a167c-949e-4769-8d06-b44c6ecf09aa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:12.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-668" for this suite. • [SLOW TEST:6.726 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":106,"skipped":1793,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:12.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8e6fc842-c7e5-4631-9bb5-7cadc0b21ed9 STEP: Creating a pod to test consume configMaps Aug 16 23:59:12.841: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392" in namespace "projected-9033" to be "Succeeded or Failed" Aug 16 23:59:12.854: INFO: Pod "pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392": Phase="Pending", Reason="", readiness=false. Elapsed: 12.55291ms Aug 16 23:59:14.857: INFO: Pod "pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015953477s Aug 16 23:59:16.862: INFO: Pod "pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020267976s STEP: Saw pod success Aug 16 23:59:16.862: INFO: Pod "pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392" satisfied condition "Succeeded or Failed" Aug 16 23:59:16.865: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392 container projected-configmap-volume-test: STEP: delete the pod Aug 16 23:59:16.901: INFO: Waiting for pod pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392 to disappear Aug 16 23:59:16.914: INFO: Pod pod-projected-configmaps-88dc1906-36b9-443e-a622-df270544f392 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:16.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9033" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":107,"skipped":1808,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:16.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 23:59:17.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 23:59:19.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:59:21.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219157, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 23:59:25.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:25.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4875" for this suite. STEP: Destroying namespace "webhook-4875-markers" for this suite. 
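------------------------------
"Registering the mutating configmap webhook via the AdmissionRegistration API", as the step above puts it, amounts to creating a MutatingWebhookConfiguration pointing at the deployed service (the log's e2e-test-webhook). A sketch with the admissionregistration/v1 types; the configuration name and path are illustrative, and a real registration must carry the CA bundle for the serving cert the suite generated:

package main

import (
	"encoding/json"
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-configmaps" // illustrative; must match the webhook server's route
	none := admv1.SideEffectClassNone
	cfg := &admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
		Webhooks: []admv1.MutatingWebhook{{
			Name: "mutate-configmaps.demo.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: nil, // the suite injects its self-signed CA here
			},
			// Intercept ConfigMap creation, mirroring the test's rule.
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

------------------------------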
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.805 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":294,"completed":108,"skipped":1810,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:25.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 16 23:59:25.862: INFO: Waiting up to 5m0s for pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8" in namespace "emptydir-5476" to be "Succeeded or Failed" Aug 16 23:59:26.177: INFO: Pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8": Phase="Pending", Reason="", readiness=false. Elapsed: 315.370161ms Aug 16 23:59:28.183: INFO: Pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32058081s Aug 16 23:59:30.195: INFO: Pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332701734s Aug 16 23:59:32.199: INFO: Pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.336923779s STEP: Saw pod success Aug 16 23:59:32.199: INFO: Pod "pod-7388178c-10a5-401b-b972-eaae3b37dda8" satisfied condition "Succeeded or Failed" Aug 16 23:59:32.202: INFO: Trying to get logs from node latest-worker2 pod pod-7388178c-10a5-401b-b972-eaae3b37dda8 container test-container: STEP: delete the pod Aug 16 23:59:32.260: INFO: Waiting for pod pod-7388178c-10a5-401b-b972-eaae3b37dda8 to disappear Aug 16 23:59:32.268: INFO: Pod pod-7388178c-10a5-401b-b972-eaae3b37dda8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:32.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5476" for this suite. 
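------------------------------
The (non-root,0644,default) case above combines three spec knobs: a non-root runAsUser, a default-medium emptyDir, and a 0644-mode file created inside it. A sketch that constructs and prints such a pod; the UID, image, and command are illustrative stand-ins for the suite's test container:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Empty EmptyDirVolumeSource = default medium (node disk);
				// corev1.StorageMediumMemory would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && id -u"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

------------------------------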
• [SLOW TEST:6.549 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":109,"skipped":1824,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:32.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 23:59:32.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 23:59:34.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219172, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219172, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219173, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219172, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 23:59:38.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:59:38.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6312-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be 
mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:39.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7301" for this suite. STEP: Destroying namespace "webhook-7301-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.635 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":294,"completed":110,"skipped":1834,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:39.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 16 23:59:40.115: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 16 23:59:40.167: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 16 23:59:45.216: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 16 23:59:45.217: INFO: Creating deployment "test-rolling-update-deployment" Aug 16 23:59:45.221: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 16 23:59:45.872: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 16 23:59:47.879: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 16 23:59:47.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219185, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-5887db9c6b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:59:49.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219185, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-5887db9c6b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 23:59:51.886: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 16 23:59:51.894: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1681 /apis/apps/v1/namespaces/deployment-1681/deployments/test-rolling-update-deployment cec680fd-5294-4b02-a0c8-dfdba4f72b00 538895 1 2020-08-16 23:59:45 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-16 23:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:59:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00373ad98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-16 23:59:46 +0000 UTC,LastTransitionTime:2020-08-16 23:59:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-5887db9c6b" has successfully progressed.,LastUpdateTime:2020-08-16 23:59:50 +0000 UTC,LastTransitionTime:2020-08-16 23:59:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 16 23:59:51.897: INFO: New ReplicaSet "test-rolling-update-deployment-5887db9c6b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-5887db9c6b deployment-1681 /apis/apps/v1/namespaces/deployment-1681/replicasets/test-rolling-update-deployment-5887db9c6b 96c9dfbb-f798-48c4-8161-80545b1c2e96 538883 1 2020-08-16 23:59:45 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cec680fd-5294-4b02-a0c8-dfdba4f72b00 0xc00373b727 0xc00373b728}] [] [{kube-controller-manager Update apps/v1 2020-08-16 23:59:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cec680fd-5294-4b02-a0c8-dfdba4f72b00\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 5887db9c6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00373b8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:59:51.897: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 16 23:59:51.897: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1681 /apis/apps/v1/namespaces/deployment-1681/replicasets/test-rolling-update-controller 823eb884-691f-4b77-bac4-228a7e88df96 538894 2 2020-08-16 23:59:40 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cec680fd-5294-4b02-a0c8-dfdba4f72b00 0xc00373b5f7 0xc00373b5f8}] [] [{e2e.test Update apps/v1 2020-08-16 23:59:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-16 23:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cec680fd-5294-4b02-a0c8-dfdba4f72b00\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00373b6b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 23:59:51.900: INFO: Pod "test-rolling-update-deployment-5887db9c6b-qrccr" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-5887db9c6b-qrccr test-rolling-update-deployment-5887db9c6b- deployment-1681 /api/v1/namespaces/deployment-1681/pods/test-rolling-update-deployment-5887db9c6b-qrccr 36615232-6520-469f-a595-986058bce26f 538882 0 2020-08-16 23:59:45 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-5887db9c6b 96c9dfbb-f798-48c4-8161-80545b1c2e96 0xc0032a0917 0xc0032a0918}] [] [{kube-controller-manager Update v1 2020-08-16 23:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96c9dfbb-f798-48c4-8161-80545b1c2e96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-16 23:59:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vw2w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vw2w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vw2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:59:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:59:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 23:59:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.3,StartTime:2020-08-16 23:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 23:59:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://09e7e00a5e59aa8ed9acae8c0a1ac81206d3b991aaf4e2fa5d2ceb9c701120bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 16 23:59:51.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1681" for this suite. • [SLOW TEST:11.995 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":111,"skipped":1845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 16 23:59:51.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 16 23:59:52.512: INFO: Waiting up to 5m0s for pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b" in namespace "emptydir-2896" to be "Succeeded or Failed" Aug 16 23:59:52.913: 
INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b": Phase="Pending", Reason="", readiness=false. Elapsed: 400.822762ms Aug 16 23:59:54.917: INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405232888s Aug 16 23:59:56.922: INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410083002s Aug 16 23:59:58.926: INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b": Phase="Running", Reason="", readiness=true. Elapsed: 6.41375909s Aug 17 00:00:00.929: INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.417408889s STEP: Saw pod success Aug 17 00:00:00.929: INFO: Pod "pod-eef30b9c-fc89-4296-ae23-5389194c539b" satisfied condition "Succeeded or Failed" Aug 17 00:00:00.932: INFO: Trying to get logs from node latest-worker pod pod-eef30b9c-fc89-4296-ae23-5389194c539b container test-container: STEP: delete the pod Aug 17 00:00:00.962: INFO: Waiting for pod pod-eef30b9c-fc89-4296-ae23-5389194c539b to disappear Aug 17 00:00:00.970: INFO: Pod pod-eef30b9c-fc89-4296-ae23-5389194c539b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:00.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2896" for this suite. • [SLOW TEST:9.069 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":112,"skipped":1914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:00.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:00:02.210: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:00:04.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:00:06.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219202, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:00:09.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 17 00:00:09.314: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:09.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3824" for this suite. STEP: Destroying namespace "webhook-3824-markers" for this suite. 
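The crd-denial test above deploys a webhook server behind the service e2e-test-webhook and registers it for CREATE operations on CustomResourceDefinitions, so the subsequent CRD create is rejected at admission rather than by schema validation. A minimal sketch of such a registration follows; the configuration name and the /crd handler path are illustrative assumptions, and caBundle is omitted for brevity (in practice it must carry the CA that signed the webhook's serving certificate).

# Register a validating webhook that intercepts CRD creation; any create
# the webhook denies is rejected by the API server at admission time.
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example                 # illustrative name
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      name: e2e-test-webhook             # service name from this run's logs
      namespace: webhook-3824
      path: /crd                         # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                    # reject requests if the webhook is unreachable
EOF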
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.306 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":294,"completed":113,"skipped":1944,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:10.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-f63d3f22-7bfe-4327-b137-4a30fa9c45e1 STEP: Creating a pod to test consume secrets Aug 17 00:00:11.711: INFO: Waiting up to 5m0s for pod "pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd" in namespace "secrets-7444" to be "Succeeded or Failed" Aug 17 00:00:11.744: INFO: Pod "pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.081014ms Aug 17 00:00:13.747: INFO: Pod "pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036398253s Aug 17 00:00:15.769: INFO: Pod "pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057765333s STEP: Saw pod success Aug 17 00:00:15.769: INFO: Pod "pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd" satisfied condition "Succeeded or Failed" Aug 17 00:00:15.771: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd container secret-volume-test: STEP: delete the pod Aug 17 00:00:15.809: INFO: Waiting for pod pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd to disappear Aug 17 00:00:15.815: INFO: Pod pod-secrets-a043932a-0d67-4cfc-95f6-e63c6767b5dd no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:15.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7444" for this suite. 
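The Secrets test that just completed mounts a Secret as a volume with an items mapping, so the key is exposed under a remapped filename and the pod's only job is to cat it and exit. A minimal sketch, assuming illustrative names and a plain busybox image in place of the suite's helper image:

# Create a secret, then consume it through a volume with a key-to-path mapping.
kubectl create secret generic secret-test-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example              # illustrative name
spec:
  restartPolicy: Never                   # pod should end Succeeded, as in the test
  containers:
  - name: secret-volume-test
    image: busybox                       # assumed stand-in image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      items:
      - key: data-1                      # key as stored in the Secret
        path: new-path-data-1            # remapped filename inside the mount
EOF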
• [SLOW TEST:5.540 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":114,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:15.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:25.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-930" for this suite. 
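The adoption test above is three steps: create a bare pod carrying a 'name' label, create a ReplicationController whose selector matches that label, and verify the controller adopts the orphan instead of spawning a second pod. A sketch under the same assumptions (the httpd image is borrowed from elsewhere in this run):

kubectl apply -f - <<'EOF'
# A bare pod with the label the controller will select on.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/httpd:2.4.38-alpine
---
# A replication controller with a matching selector; it adopts the pod above
# rather than creating a replacement, since replicas is already satisfied.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Adoption is visible as an ownerReference added to the pod:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'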
• [SLOW TEST:9.191 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":294,"completed":115,"skipped":1977,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:25.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4761 STEP: creating replication controller nodeport-test in namespace services-4761 I0817 00:00:25.302791 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4761, replica count: 2 I0817 00:00:28.353176 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:00:31.353400 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:00:34.353631 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 00:00:34.353: INFO: Creating new exec pod Aug 17 00:00:39.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4761 execpodbvc2v -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 17 00:00:42.735: INFO: stderr: "I0817 00:00:42.642985 1489 log.go:181] (0xc00003b4a0) (0xc0008a3220) Create stream\nI0817 00:00:42.643032 1489 log.go:181] (0xc00003b4a0) (0xc0008a3220) Stream added, broadcasting: 1\nI0817 00:00:42.644568 1489 log.go:181] (0xc00003b4a0) Reply frame received for 1\nI0817 00:00:42.644612 1489 log.go:181] (0xc00003b4a0) (0xc000808820) Create stream\nI0817 00:00:42.644625 1489 log.go:181] (0xc00003b4a0) (0xc000808820) Stream added, broadcasting: 3\nI0817 00:00:42.645545 1489 log.go:181] (0xc00003b4a0) Reply frame received for 3\nI0817 00:00:42.645581 1489 log.go:181] (0xc00003b4a0) (0xc000809860) Create stream\nI0817 00:00:42.645592 1489 log.go:181] (0xc00003b4a0) (0xc000809860) Stream added, 
broadcasting: 5\nI0817 00:00:42.646443 1489 log.go:181] (0xc00003b4a0) Reply frame received for 5\nI0817 00:00:42.726395 1489 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0817 00:00:42.726424 1489 log.go:181] (0xc000809860) (5) Data frame handling\nI0817 00:00:42.726438 1489 log.go:181] (0xc000809860) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0817 00:00:42.726835 1489 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0817 00:00:42.726851 1489 log.go:181] (0xc000809860) (5) Data frame handling\nI0817 00:00:42.726870 1489 log.go:181] (0xc000809860) (5) Data frame sent\nI0817 00:00:42.726893 1489 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0817 00:00:42.726908 1489 log.go:181] (0xc000809860) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0817 00:00:42.727080 1489 log.go:181] (0xc00003b4a0) Data frame received for 3\nI0817 00:00:42.727096 1489 log.go:181] (0xc000808820) (3) Data frame handling\nI0817 00:00:42.728934 1489 log.go:181] (0xc00003b4a0) Data frame received for 1\nI0817 00:00:42.728958 1489 log.go:181] (0xc0008a3220) (1) Data frame handling\nI0817 00:00:42.728972 1489 log.go:181] (0xc0008a3220) (1) Data frame sent\nI0817 00:00:42.728986 1489 log.go:181] (0xc00003b4a0) (0xc0008a3220) Stream removed, broadcasting: 1\nI0817 00:00:42.729002 1489 log.go:181] (0xc00003b4a0) Go away received\nI0817 00:00:42.729407 1489 log.go:181] (0xc00003b4a0) (0xc0008a3220) Stream removed, broadcasting: 1\nI0817 00:00:42.729422 1489 log.go:181] (0xc00003b4a0) (0xc000808820) Stream removed, broadcasting: 3\nI0817 00:00:42.729428 1489 log.go:181] (0xc00003b4a0) (0xc000809860) Stream removed, broadcasting: 5\n" Aug 17 00:00:42.736: INFO: stdout: "" Aug 17 00:00:42.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4761 execpodbvc2v -- /bin/sh -x -c nc -zv -t -w 2 10.109.96.136 80' Aug 17 00:00:42.959: INFO: stderr: "I0817 00:00:42.881840 1507 log.go:181] (0xc00069af20) (0xc0003fc000) Create stream\nI0817 00:00:42.881909 1507 log.go:181] (0xc00069af20) (0xc0003fc000) Stream added, broadcasting: 1\nI0817 00:00:42.883722 1507 log.go:181] (0xc00069af20) Reply frame received for 1\nI0817 00:00:42.883748 1507 log.go:181] (0xc00069af20) (0xc0004e25a0) Create stream\nI0817 00:00:42.883758 1507 log.go:181] (0xc00069af20) (0xc0004e25a0) Stream added, broadcasting: 3\nI0817 00:00:42.884364 1507 log.go:181] (0xc00069af20) Reply frame received for 3\nI0817 00:00:42.884386 1507 log.go:181] (0xc00069af20) (0xc00011ac80) Create stream\nI0817 00:00:42.884393 1507 log.go:181] (0xc00069af20) (0xc00011ac80) Stream added, broadcasting: 5\nI0817 00:00:42.885083 1507 log.go:181] (0xc00069af20) Reply frame received for 5\nI0817 00:00:42.950528 1507 log.go:181] (0xc00069af20) Data frame received for 5\nI0817 00:00:42.950557 1507 log.go:181] (0xc00011ac80) (5) Data frame handling\nI0817 00:00:42.950564 1507 log.go:181] (0xc00011ac80) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.96.136 80\nConnection to 10.109.96.136 80 port [tcp/http] succeeded!\nI0817 00:00:42.950581 1507 log.go:181] (0xc00069af20) Data frame received for 3\nI0817 00:00:42.950613 1507 log.go:181] (0xc0004e25a0) (3) Data frame handling\nI0817 00:00:42.950636 1507 log.go:181] (0xc00069af20) Data frame received for 5\nI0817 00:00:42.950646 1507 log.go:181] (0xc00011ac80) (5) Data frame handling\nI0817 00:00:42.952202 1507 log.go:181] (0xc00069af20) Data frame received for 1\nI0817 00:00:42.952236 1507 
log.go:181] (0xc0003fc000) (1) Data frame handling\nI0817 00:00:42.952271 1507 log.go:181] (0xc0003fc000) (1) Data frame sent\nI0817 00:00:42.952294 1507 log.go:181] (0xc00069af20) (0xc0003fc000) Stream removed, broadcasting: 1\nI0817 00:00:42.952330 1507 log.go:181] (0xc00069af20) Go away received\nI0817 00:00:42.952982 1507 log.go:181] (0xc00069af20) (0xc0003fc000) Stream removed, broadcasting: 1\nI0817 00:00:42.953016 1507 log.go:181] (0xc00069af20) (0xc0004e25a0) Stream removed, broadcasting: 3\nI0817 00:00:42.953035 1507 log.go:181] (0xc00069af20) (0xc00011ac80) Stream removed, broadcasting: 5\n" Aug 17 00:00:42.959: INFO: stdout: "" Aug 17 00:00:42.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4761 execpodbvc2v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32530' Aug 17 00:00:43.279: INFO: stderr: "I0817 00:00:43.196843 1525 log.go:181] (0xc000b7cdc0) (0xc000c288c0) Create stream\nI0817 00:00:43.196905 1525 log.go:181] (0xc000b7cdc0) (0xc000c288c0) Stream added, broadcasting: 1\nI0817 00:00:43.198825 1525 log.go:181] (0xc000b7cdc0) Reply frame received for 1\nI0817 00:00:43.198872 1525 log.go:181] (0xc000b7cdc0) (0xc000b210e0) Create stream\nI0817 00:00:43.198886 1525 log.go:181] (0xc000b7cdc0) (0xc000b210e0) Stream added, broadcasting: 3\nI0817 00:00:43.199638 1525 log.go:181] (0xc000b7cdc0) Reply frame received for 3\nI0817 00:00:43.199671 1525 log.go:181] (0xc000b7cdc0) (0xc0003b40a0) Create stream\nI0817 00:00:43.199682 1525 log.go:181] (0xc000b7cdc0) (0xc0003b40a0) Stream added, broadcasting: 5\nI0817 00:00:43.201016 1525 log.go:181] (0xc000b7cdc0) Reply frame received for 5\nI0817 00:00:43.271436 1525 log.go:181] (0xc000b7cdc0) Data frame received for 5\nI0817 00:00:43.271494 1525 log.go:181] (0xc0003b40a0) (5) Data frame handling\nI0817 00:00:43.271514 1525 log.go:181] (0xc0003b40a0) (5) Data frame sent\nI0817 00:00:43.271527 1525 log.go:181] (0xc000b7cdc0) Data frame received for 5\nI0817 00:00:43.271537 1525 log.go:181] (0xc0003b40a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32530\nConnection to 172.18.0.11 32530 port [tcp/32530] succeeded!\nI0817 00:00:43.271575 1525 log.go:181] (0xc000b7cdc0) Data frame received for 3\nI0817 00:00:43.271590 1525 log.go:181] (0xc000b210e0) (3) Data frame handling\nI0817 00:00:43.273321 1525 log.go:181] (0xc000b7cdc0) Data frame received for 1\nI0817 00:00:43.273346 1525 log.go:181] (0xc000c288c0) (1) Data frame handling\nI0817 00:00:43.273354 1525 log.go:181] (0xc000c288c0) (1) Data frame sent\nI0817 00:00:43.273374 1525 log.go:181] (0xc000b7cdc0) (0xc000c288c0) Stream removed, broadcasting: 1\nI0817 00:00:43.273458 1525 log.go:181] (0xc000b7cdc0) Go away received\nI0817 00:00:43.273744 1525 log.go:181] (0xc000b7cdc0) (0xc000c288c0) Stream removed, broadcasting: 1\nI0817 00:00:43.273759 1525 log.go:181] (0xc000b7cdc0) (0xc000b210e0) Stream removed, broadcasting: 3\nI0817 00:00:43.273766 1525 log.go:181] (0xc000b7cdc0) (0xc0003b40a0) Stream removed, broadcasting: 5\n" Aug 17 00:00:43.279: INFO: stdout: "" Aug 17 00:00:43.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4761 execpodbvc2v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32530' Aug 17 00:00:43.496: INFO: stderr: "I0817 00:00:43.409644 1543 log.go:181] (0xc000d32fd0) (0xc000c0fb80) Create stream\nI0817 00:00:43.409728 1543 log.go:181] (0xc000d32fd0) (0xc000c0fb80) Stream added, broadcasting: 
1\nI0817 00:00:43.422873 1543 log.go:181] (0xc000d32fd0) Reply frame received for 1\nI0817 00:00:43.422913 1543 log.go:181] (0xc000d32fd0) (0xc000820b40) Create stream\nI0817 00:00:43.422923 1543 log.go:181] (0xc000d32fd0) (0xc000820b40) Stream added, broadcasting: 3\nI0817 00:00:43.423576 1543 log.go:181] (0xc000d32fd0) Reply frame received for 3\nI0817 00:00:43.423601 1543 log.go:181] (0xc000d32fd0) (0xc000966fa0) Create stream\nI0817 00:00:43.423608 1543 log.go:181] (0xc000d32fd0) (0xc000966fa0) Stream added, broadcasting: 5\nI0817 00:00:43.424246 1543 log.go:181] (0xc000d32fd0) Reply frame received for 5\nI0817 00:00:43.487219 1543 log.go:181] (0xc000d32fd0) Data frame received for 3\nI0817 00:00:43.487257 1543 log.go:181] (0xc000820b40) (3) Data frame handling\nI0817 00:00:43.487277 1543 log.go:181] (0xc000d32fd0) Data frame received for 5\nI0817 00:00:43.487284 1543 log.go:181] (0xc000966fa0) (5) Data frame handling\nI0817 00:00:43.487302 1543 log.go:181] (0xc000966fa0) (5) Data frame sent\nI0817 00:00:43.487316 1543 log.go:181] (0xc000d32fd0) Data frame received for 5\nI0817 00:00:43.487324 1543 log.go:181] (0xc000966fa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32530\nConnection to 172.18.0.14 32530 port [tcp/32530] succeeded!\nI0817 00:00:43.488419 1543 log.go:181] (0xc000d32fd0) Data frame received for 1\nI0817 00:00:43.488432 1543 log.go:181] (0xc000c0fb80) (1) Data frame handling\nI0817 00:00:43.488441 1543 log.go:181] (0xc000c0fb80) (1) Data frame sent\nI0817 00:00:43.488563 1543 log.go:181] (0xc000d32fd0) (0xc000c0fb80) Stream removed, broadcasting: 1\nI0817 00:00:43.488646 1543 log.go:181] (0xc000d32fd0) Go away received\nI0817 00:00:43.489146 1543 log.go:181] (0xc000d32fd0) (0xc000c0fb80) Stream removed, broadcasting: 1\nI0817 00:00:43.489164 1543 log.go:181] (0xc000d32fd0) (0xc000820b40) Stream removed, broadcasting: 3\nI0817 00:00:43.489228 1543 log.go:181] (0xc000d32fd0) (0xc000966fa0) Stream removed, broadcasting: 5\n" Aug 17 00:00:43.496: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:43.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4761" for this suite. 
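The four kubectl exec blocks above are one connectivity check repeated against four targets: the service DNS name, the cluster IP (10.109.96.136), and each node IP (172.18.0.11, 172.18.0.14) on the allocated node port 32530. A minimal NodePort service that would behave the same way, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort                 # also allocates a port on every node
  selector:
    name: nodeport-test          # must match the backing pods' labels
  ports:
  - protocol: TCP
    port: 80                     # cluster-IP port
    targetPort: 80               # container port
EOF

# Read back the allocated node port, then probe it the way the test does
# (execpod is an assumed existing pod with /bin/sh and nc available).
NODE_PORT=$(kubectl get svc nodeport-test -o jsonpath='{.spec.ports[0].nodePort}')
kubectl exec execpod -- /bin/sh -x -c "nc -zv -t -w 2 nodeport-test 80"
kubectl exec execpod -- /bin/sh -x -c "nc -zv -t -w 2 172.18.0.11 $NODE_PORT"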
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:18.545 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":294,"completed":116,"skipped":1997,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:43.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:00:43.681: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48" in namespace "security-context-test-5691" to be "Succeeded or Failed" Aug 17 00:00:43.696: INFO: Pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48": Phase="Pending", Reason="", readiness=false. Elapsed: 15.263878ms Aug 17 00:00:45.805: INFO: Pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124429686s Aug 17 00:00:47.810: INFO: Pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129179467s Aug 17 00:00:47.810: INFO: Pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48" satisfied condition "Succeeded or Failed" Aug 17 00:00:47.816: INFO: Got logs for pod "busybox-privileged-false-15069fd1-44bc-40b1-b3da-60177156ea48": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5691" for this suite. 
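The 'RTNETLINK answers: Operation not permitted' line in the pod logs is the point of the test: with privileged: false the container lacks CAP_NET_ADMIN, so adding a network interface is refused. A sketch with an illustrative pod name; the '|| true' keeps the pod's final phase at Succeeded, which is what the test waits for:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox
    # Creating a link device needs privileges the container does not have,
    # so the command fails; '|| true' lets the pod still terminate Succeeded.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
EOF
kubectl logs busybox-privileged-false-example
# expected output: ip: RTNETLINK answers: Operation not permitted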
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":117,"skipped":2014,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:47.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 00:00:47.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff" in namespace "downward-api-7026" to be "Succeeded or Failed" Aug 17 00:00:47.918: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944937ms Aug 17 00:00:49.962: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046270117s Aug 17 00:00:51.966: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050227352s Aug 17 00:00:54.093: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff": Phase="Running", Reason="", readiness=true. Elapsed: 6.177972081s Aug 17 00:00:56.097: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181827992s STEP: Saw pod success Aug 17 00:00:56.097: INFO: Pod "downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff" satisfied condition "Succeeded or Failed" Aug 17 00:00:56.101: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff container client-container: STEP: delete the pod Aug 17 00:00:56.124: INFO: Waiting for pod downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff to disappear Aug 17 00:00:56.155: INFO: Pod downwardapi-volume-84836436-248b-49fa-84af-1ed541ec0aff no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:00:56.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7026" for this suite. 
• [SLOW TEST:8.342 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":118,"skipped":2033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:00:56.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 00:01:00.907: INFO: Successfully updated pod "labelsupdatef14d3e7b-f1ca-4f65-99ec-d739685b2963" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:01:03.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1709" for this suite. 
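The labels test works because the kubelet periodically re-renders downward API volume files: after the pod's labels are patched, the mounted labels file changes in place and the running container sees the new value without any restart. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example         # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox                   # assumed stand-in image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Change a label; the kubelet rewrites /etc/podinfo/labels shortly afterwards.
kubectl label pod labelsupdate-example key1=value2 --overwrite
kubectl logs labelsupdate-example --tail=2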
• [SLOW TEST:7.000 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":119,"skipped":2083,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:01:03.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:01:03.429: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 17 00:01:03.447: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:03.464: INFO: Number of nodes with available pods: 0 Aug 17 00:01:03.465: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:04.469: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:04.472: INFO: Number of nodes with available pods: 0 Aug 17 00:01:04.472: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:05.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:05.757: INFO: Number of nodes with available pods: 0 Aug 17 00:01:05.757: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:06.469: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:06.472: INFO: Number of nodes with available pods: 0 Aug 17 00:01:06.472: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:07.469: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:07.477: INFO: Number of nodes with available pods: 2 Aug 17 00:01:07.477: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 17 00:01:07.530: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:07.530: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:07.547: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:08.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:08.552: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:08.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:09.945: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:09.945: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:09.950: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:10.732: INFO: Wrong image for pod: daemon-set-lhfnq. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:10.732: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:10.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:11.802: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:11.802: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:11.806: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:12.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:12.552: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:12.552: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:12.555: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:13.550: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:13.550: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:13.550: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:13.552: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:14.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:14.551: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:14.551: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:14.554: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:15.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:15.551: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 17 00:01:15.551: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:15.555: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:16.555: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:16.555: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:16.555: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:16.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:17.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:17.552: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:17.552: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:17.555: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:18.675: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:18.675: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:18.675: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:18.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:19.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:19.552: INFO: Wrong image for pod: daemon-set-wwcjb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:19.552: INFO: Pod daemon-set-wwcjb is not available Aug 17 00:01:19.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:20.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:20.552: INFO: Pod daemon-set-tjtnk is not available Aug 17 00:01:20.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:21.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 17 00:01:21.552: INFO: Pod daemon-set-tjtnk is not available Aug 17 00:01:21.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:22.705: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:22.705: INFO: Pod daemon-set-tjtnk is not available Aug 17 00:01:22.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:23.703: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:23.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:24.716: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:24.721: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:25.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:25.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:26.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:26.551: INFO: Pod daemon-set-lhfnq is not available Aug 17 00:01:26.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:27.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:27.551: INFO: Pod daemon-set-lhfnq is not available Aug 17 00:01:27.554: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:28.552: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 00:01:28.552: INFO: Pod daemon-set-lhfnq is not available Aug 17 00:01:28.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:29.551: INFO: Wrong image for pod: daemon-set-lhfnq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 17 00:01:29.551: INFO: Pod daemon-set-lhfnq is not available Aug 17 00:01:29.555: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:30.552: INFO: Pod daemon-set-fk686 is not available Aug 17 00:01:30.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Aug 17 00:01:30.561: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:30.564: INFO: Number of nodes with available pods: 1 Aug 17 00:01:30.564: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:31.591: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:31.594: INFO: Number of nodes with available pods: 1 Aug 17 00:01:31.594: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:32.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:32.574: INFO: Number of nodes with available pods: 1 Aug 17 00:01:32.574: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:01:33.585: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 00:01:33.589: INFO: Number of nodes with available pods: 2 Aug 17 00:01:33.589: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7873, will wait for the garbage collector to delete the pods Aug 17 00:01:33.665: INFO: Deleting DaemonSet.extensions daemon-set took: 9.190168ms Aug 17 00:01:34.065: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.269713ms Aug 17 00:01:40.069: INFO: Number of nodes with available pods: 0 Aug 17 00:01:40.069: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 00:01:40.072: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7873/daemonsets","resourceVersion":"539592"},"items":null} Aug 17 00:01:40.074: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7873/pods","resourceVersion":"539592"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:01:40.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7873" for this suite. 
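The long image-check loop above is the RollingUpdate strategy working node by node: each 'Wrong image' line is a pod still running the old httpd image, 'not available' marks the pod currently being replaced, and the loop exits once both schedulable nodes run the agnhost image. A DaemonSet sketch that would roll the same way (names and images mirror this run's logs; the label key is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # illustrative label key
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1              # at most one node's pod down at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF

# Updating the template image triggers the node-by-node replacement seen above.
kubectl set image daemonset/daemon-set app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
kubectl rollout status daemonset/daemon-set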
• [SLOW TEST:36.924 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":294,"completed":120,"skipped":2098,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:01:40.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-e0cf2aee-d1a2-4324-b149-0db12456d809 in namespace container-probe-5207 Aug 17 00:01:46.327: INFO: Started pod test-webserver-e0cf2aee-d1a2-4324-b149-0db12456d809 in namespace container-probe-5207 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 00:01:46.331: INFO: Initial restart count of pod test-webserver-e0cf2aee-d1a2-4324-b149-0db12456d809 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:05:47.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5207" for this suite. 
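The probe test above runs a web server pod for roughly four minutes and asserts that its restartCount never moves off its initial value of 0: a liveness probe that keeps succeeding never triggers a kubelet restart. A minimal sketch of the same shape, with a hypothetical pod name and nginx standing in for the test's own test-webserver image (the probe path here is "/" so the stand-in server answers 200; the e2e test probes a path its image is known to serve):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo            # hypothetical name
spec:
  containers:
  - name: webserver
    image: nginx:1.21            # stand-in for the e2e test-webserver image
    livenessProbe:
      httpGet:
        path: /                  # must be a path the server answers with 2xx
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF

# The assertion the test makes: the restart count stays at 0.
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'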
• [SLOW TEST:247.967 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":121,"skipped":2107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:05:48.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 17 00:05:48.573: INFO: starting watch STEP: patching STEP: updating Aug 17 00:05:48.606: INFO: waiting for watch events with expected annotations Aug 17 00:05:48.606: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:05:48.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-8098" for this suite. 
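The IngressClass block above exercises the full verb set of the networking.k8s.io/v1 API: create, get, list, watch, patch, update, delete, and deletecollection. A minimal sketch of the create/get/patch/delete cycle against the same API, with a hypothetical class name and controller string:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class                              # hypothetical name
spec:
  controller: example.com/ingress-controller    # hypothetical controller
EOF

kubectl get ingressclass demo-class
# Annotating is what the test's "patching"/"updating" steps verify
# through watch events on the annotations.
kubectl annotate ingressclass demo-class patched=true
kubectl delete ingressclass demo-class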
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":294,"completed":122,"skipped":2142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:05:48.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:06:27.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2103" for this suite. 
• [SLOW TEST:39.302 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":294,"completed":123,"skipped":2168,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:06:28.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-78779413-0c8e-4beb-86fd-039180a90328 STEP: Creating a pod to test consume configMaps Aug 17 00:06:28.300: INFO: Waiting up to 5m0s for pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9" in namespace "configmap-5636" to be "Succeeded or Failed" Aug 17 00:06:28.386: INFO: Pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9": Phase="Pending", Reason="", readiness=false. Elapsed: 85.95459ms Aug 17 00:06:30.391: INFO: Pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091244852s Aug 17 00:06:32.445: INFO: Pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145515988s Aug 17 00:06:34.448: INFO: Pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.148409979s STEP: Saw pod success Aug 17 00:06:34.448: INFO: Pod "pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9" satisfied condition "Succeeded or Failed" Aug 17 00:06:34.491: INFO: Trying to get logs from node latest-worker pod pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9 container configmap-volume-test: STEP: delete the pod Aug 17 00:06:34.601: INFO: Waiting for pod pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9 to disappear Aug 17 00:06:34.625: INFO: Pod pod-configmaps-be2ef2a2-785e-46cd-88cc-3e7cd93b29d9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:06:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5636" for this suite. • [SLOW TEST:6.561 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":124,"skipped":2171,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:06:34.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:06:34.719: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 17 00:06:37.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9676 create -f -' Aug 17 00:06:41.002: INFO: stderr: "" Aug 17 00:06:41.002: INFO: stdout: "e2e-test-crd-publish-openapi-852-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 17 00:06:41.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9676 delete e2e-test-crd-publish-openapi-852-crds test-cr' Aug 17 00:06:41.117: INFO: stderr: "" Aug 17 00:06:41.117: INFO: stdout: "e2e-test-crd-publish-openapi-852-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 17 00:06:41.117: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9676 apply -f -' Aug 17 00:06:41.423: INFO: stderr: "" Aug 17 00:06:41.423: INFO: stdout: "e2e-test-crd-publish-openapi-852-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 17 00:06:41.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9676 delete e2e-test-crd-publish-openapi-852-crds test-cr' Aug 17 00:06:41.530: INFO: stderr: "" Aug 17 00:06:41.530: INFO: stdout: "e2e-test-crd-publish-openapi-852-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 17 00:06:41.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-852-crds' Aug 17 00:06:41.827: INFO: stderr: "" Aug 17 00:06:41.827: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-852-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:06:44.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9676" for this suite. 
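Because the CRD above is registered with unknown fields preserved, the published OpenAPI schema accepts arbitrary properties under spec, and kubectl can both validate and explain the type. A minimal sketch using the resource names taken from the log above; the spec field shown is an arbitrary example, not one the test uses:

kubectl explain e2e-test-crd-publish-openapi-852-crds
kubectl explain e2e-test-crd-publish-openapi-852-crds.spec

kubectl apply -f - <<'EOF'
apiVersion: crd-publish-openapi-test-unknown-in-nested.example.com/v1
kind: E2e-test-crd-publish-openapi-852-crd
metadata:
  name: test-cr
spec:
  anyUnknownField: accepted        # allowed: unknown properties are preserved
EOF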
• [SLOW TEST:10.176 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":294,"completed":125,"skipped":2171,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:06:44.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 17 00:06:46.455: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 17 00:06:48.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219606, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219606, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219606, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219606, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:06:51.501: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
Aug 17 00:06:51.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:06:52.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5769" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.125 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":294,"completed":126,"skipped":2173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:06:52.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Aug 17 00:06:53.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config api-versions' Aug 17 00:06:53.290: INFO: stderr: "" Aug 17 00:06:53.290: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:06:53.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8836" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":294,"completed":127,"skipped":2216,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:06:53.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 17 00:06:59.606: INFO: &Pod{ObjectMeta:{send-events-e8744b87-df0a-4c73-b314-527567a3be52 events-238 /api/v1/namespaces/events-238/pods/send-events-e8744b87-df0a-4c73-b314-527567a3be52 bf5529ec-0520-4a78-b627-952f8d96a9dd 540758 0 2020-08-17 00:06:53 +0000 UTC map[name:foo time:358176196] map[] [] [] [{e2e.test Update v1 2020-08-17 00:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 00:06:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m5t6k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m5t6k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m5t6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:06:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:06:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.16,StartTime:2020-08-17 00:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 00:06:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://cd03b9136d9e0cc9f39ed5057983119aae20a0250d20e9847f27b957b56cf353,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 17 00:07:01.610: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 17 00:07:03.693: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:07:03.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-238" for this suite. • [SLOW TEST:10.620 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":294,"completed":128,"skipped":2228,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:07:03.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image 
docker.io/library/httpd:2.4.38-alpine Aug 17 00:07:04.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1662' Aug 17 00:07:04.258: INFO: stderr: "" Aug 17 00:07:04.258: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 17 00:07:04.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-1662' Aug 17 00:07:04.492: INFO: stderr: "" Aug 17 00:07:04.492: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-17T00:07:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-17T00:07:04Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1662\",\n \"resourceVersion\": \"540801\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1662/pods/e2e-test-httpd-pod\",\n \"uid\": \"31756783-c73e-4508-a8d3-7549130c1c0b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gxdbg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gxdbg\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gxdbg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T00:07:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Aug 17 00:07:04.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-1662' Aug 17 00:07:05.527: INFO: stderr: "W0817 00:07:04.566642 1706 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 17 00:07:05.527: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 17 00:07:05.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1662' Aug 17 00:07:08.414: INFO: stderr: "" Aug 17 00:07:08.414: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:07:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1662" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":294,"completed":129,"skipped":2233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:07:08.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:07:09.199: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:07:11.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219629, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219629, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219629, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219629, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook 
service STEP: Verifying the service has paired with the endpoint Aug 17 00:07:14.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:07:24.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2294" for this suite. STEP: Destroying namespace "webhook-2294-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":294,"completed":130,"skipped":2265,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:07:24.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be 
ready Aug 17 00:07:25.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:07:28.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-7bc8486f8c\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Aug 17 00:07:30.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219648, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:07:32.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219648, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:07:34.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219648, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219645, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:07:38.095: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:07:47.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9695" for this suite. STEP: Destroying namespace "webhook-9695-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:29.579 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":294,"completed":131,"skipped":2279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:07:54.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1328 STEP: 
creating the pod Aug 17 00:07:55.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7785' Aug 17 00:07:56.337: INFO: stderr: "" Aug 17 00:07:56.337: INFO: stdout: "pod/pause created\n" Aug 17 00:07:56.337: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 17 00:07:56.337: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7785" to be "running and ready" Aug 17 00:07:56.497: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 159.480429ms Aug 17 00:07:58.621: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283442093s Aug 17 00:08:00.624: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287225407s Aug 17 00:08:02.628: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.290753071s Aug 17 00:08:02.628: INFO: Pod "pause" satisfied condition "running and ready" Aug 17 00:08:02.628: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 17 00:08:02.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7785' Aug 17 00:08:02.729: INFO: stderr: "" Aug 17 00:08:02.729: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 17 00:08:02.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7785' Aug 17 00:08:02.825: INFO: stderr: "" Aug 17 00:08:02.825: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 17 00:08:02.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7785' Aug 17 00:08:03.394: INFO: stderr: "" Aug 17 00:08:03.394: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 17 00:08:03.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7785' Aug 17 00:08:03.546: INFO: stderr: "" Aug 17 00:08:03.546: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1335 STEP: using delete to clean up resources Aug 17 00:08:03.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7785' Aug 17 00:08:03.730: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:08:03.730: INFO: stdout: "pod \"pause\" force deleted\n" Aug 17 00:08:03.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7785' Aug 17 00:08:03.825: INFO: stderr: "No resources found in kubectl-7785 namespace.\n" Aug 17 00:08:03.825: INFO: stdout: "" Aug 17 00:08:03.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7785 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 00:08:03.913: INFO: stderr: "" Aug 17 00:08:03.913: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:03.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7785" for this suite. • [SLOW TEST:9.796 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1325 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":294,"completed":132,"skipped":2309,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:03.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-8076cae6-3d1b-43a7-99a3-42dad93c106d STEP: Creating a pod to test consume secrets Aug 17 00:08:05.149: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad" in namespace "projected-7963" to be "Succeeded or Failed" Aug 17 00:08:05.220: INFO: Pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad": Phase="Pending", Reason="", readiness=false. Elapsed: 70.833261ms Aug 17 00:08:07.441: INFO: Pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.292514307s Aug 17 00:08:09.452: INFO: Pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302986436s Aug 17 00:08:11.456: INFO: Pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.307109612s STEP: Saw pod success Aug 17 00:08:11.456: INFO: Pod "pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad" satisfied condition "Succeeded or Failed" Aug 17 00:08:11.459: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad container projected-secret-volume-test: STEP: delete the pod Aug 17 00:08:11.562: INFO: Waiting for pod pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad to disappear Aug 17 00:08:11.602: INFO: Pod pod-projected-secrets-243cbe51-a3a8-4520-80dd-d9acb1e1bcad no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:11.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7963" for this suite. • [SLOW TEST:7.691 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":133,"skipped":2312,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:11.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 17 00:08:12.142: INFO: Waiting up to 5m0s for pod "var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8" in namespace "var-expansion-1437" to be "Succeeded or Failed" Aug 17 00:08:12.202: INFO: Pod "var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 59.759683ms Aug 17 00:08:14.206: INFO: Pod "var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063788751s Aug 17 00:08:16.209: INFO: Pod "var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067006224s STEP: Saw pod success Aug 17 00:08:16.209: INFO: Pod "var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8" satisfied condition "Succeeded or Failed" Aug 17 00:08:16.211: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8 container dapi-container: STEP: delete the pod Aug 17 00:08:16.240: INFO: Waiting for pod var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8 to disappear Aug 17 00:08:16.464: INFO: Pod var-expansion-bec75374-5023-4265-81bc-1aedd49db5a8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:16.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1437" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":294,"completed":134,"skipped":2322,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:16.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 17 00:08:16.558: INFO: Waiting up to 5m0s for pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da" in namespace "emptydir-9157" to be "Succeeded or Failed" Aug 17 00:08:16.596: INFO: Pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da": Phase="Pending", Reason="", readiness=false. Elapsed: 38.177559ms Aug 17 00:08:18.777: INFO: Pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218957885s Aug 17 00:08:20.780: INFO: Pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222201996s Aug 17 00:08:22.785: INFO: Pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.226664257s STEP: Saw pod success Aug 17 00:08:22.785: INFO: Pod "pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da" satisfied condition "Succeeded or Failed" Aug 17 00:08:22.788: INFO: Trying to get logs from node latest-worker pod pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da container test-container: STEP: delete the pod Aug 17 00:08:23.442: INFO: Waiting for pod pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da to disappear Aug 17 00:08:23.477: INFO: Pod pod-234b2d7f-9b6b-42e1-b0dd-f4cc84c201da no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:23.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9157" for this suite. • [SLOW TEST:7.169 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":135,"skipped":2324,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:23.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 17 00:08:30.617: INFO: Successfully updated pod "pod-update-6153b99e-f52e-4874-b7d4-20e0951914c3" STEP: verifying the updated pod is in kubernetes Aug 17 00:08:30.639: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:30.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8369" for this suite. 
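
The "updating the pod" step above exercises an in-place mutation of pod metadata through the API. A rough command-line equivalent is a merge patch against the pod's labels; the pod name and the label key/value here are illustrative, not values from this run:

  kubectl patch pod pod-update-example --type=merge \
    -p '{"metadata":{"labels":{"time":"morning"}}}'

Only a handful of pod fields are mutable in place (labels, annotations, container images, and a few others); the test then re-reads the pod to confirm the update took effect.
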
• [SLOW TEST:6.984 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":294,"completed":136,"skipped":2326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:30.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 00:08:30.850: INFO: Waiting up to 5m0s for pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84" in namespace "downward-api-9919" to be "Succeeded or Failed" Aug 17 00:08:30.854: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469764ms Aug 17 00:08:32.857: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006735588s Aug 17 00:08:34.927: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076397466s Aug 17 00:08:37.017: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84": Phase="Running", Reason="", readiness=true. Elapsed: 6.166586505s Aug 17 00:08:39.026: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175504975s STEP: Saw pod success Aug 17 00:08:39.026: INFO: Pod "downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84" satisfied condition "Succeeded or Failed" Aug 17 00:08:39.034: INFO: Trying to get logs from node latest-worker pod downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84 container dapi-container: STEP: delete the pod Aug 17 00:08:39.289: INFO: Waiting for pod downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84 to disappear Aug 17 00:08:39.489: INFO: Pod downward-api-aa15e49e-71fc-49f7-953f-3e43fa85fa84 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:39.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9919" for this suite. 
• [SLOW TEST:8.853 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":294,"completed":137,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:39.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:08:42.903: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:08:45.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219723, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:08:47.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219723, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:08:49.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219723, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733219722, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:08:52.993: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:08:52.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3371" for this suite. STEP: Destroying namespace "webhook-3371-markers" for this suite. 
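
The discovery walk above can be reproduced against any cluster: kubectl can fetch the same discovery documents the test inspects. The v1 document should list mutatingwebhookconfigurations and validatingwebhookconfigurations among its resources, which is exactly what the final STEP asserts:

  kubectl get --raw /apis
  kubectl get --raw /apis/admissionregistration.k8s.io
  kubectl get --raw /apis/admissionregistration.k8s.io/v1
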
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.600 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":294,"completed":138,"skipped":2381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:08:55.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 17 00:08:58.415: INFO: Pod name wrapped-volume-race-9a4c3a2a-914b-4da5-8ce1-b8b90e9a9e13: Found 0 pods out of 5 Aug 17 00:09:03.546: INFO: Pod name wrapped-volume-race-9a4c3a2a-914b-4da5-8ce1-b8b90e9a9e13: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9a4c3a2a-914b-4da5-8ce1-b8b90e9a9e13 in namespace emptydir-wrapper-4973, will wait for the garbage collector to delete the pods Aug 17 00:09:21.987: INFO: Deleting ReplicationController wrapped-volume-race-9a4c3a2a-914b-4da5-8ce1-b8b90e9a9e13 took: 7.597671ms Aug 17 00:09:22.787: INFO: Terminating ReplicationController wrapped-volume-race-9a4c3a2a-914b-4da5-8ce1-b8b90e9a9e13 pods took: 800.276782ms STEP: Creating RC which spawns configmap-volume pods Aug 17 00:09:31.130: INFO: Pod name wrapped-volume-race-7affc110-0189-4d91-be34-bdf8ed61a62a: Found 0 pods out of 5 Aug 17 00:09:36.138: INFO: Pod name wrapped-volume-race-7affc110-0189-4d91-be34-bdf8ed61a62a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7affc110-0189-4d91-be34-bdf8ed61a62a in namespace emptydir-wrapper-4973, will wait for the garbage collector to delete the pods Aug 17 00:09:58.338: INFO: Deleting ReplicationController wrapped-volume-race-7affc110-0189-4d91-be34-bdf8ed61a62a took: 112.929014ms Aug 17 00:09:59.838: INFO: Terminating ReplicationController wrapped-volume-race-7affc110-0189-4d91-be34-bdf8ed61a62a pods took: 1.500232754s STEP: Creating RC 
which spawns configmap-volume pods Aug 17 00:10:20.029: INFO: Pod name wrapped-volume-race-3fdc2d7c-aff5-413b-b954-47e4674c3a99: Found 0 pods out of 5 Aug 17 00:10:25.340: INFO: Pod name wrapped-volume-race-3fdc2d7c-aff5-413b-b954-47e4674c3a99: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3fdc2d7c-aff5-413b-b954-47e4674c3a99 in namespace emptydir-wrapper-4973, will wait for the garbage collector to delete the pods Aug 17 00:10:42.326: INFO: Deleting ReplicationController wrapped-volume-race-3fdc2d7c-aff5-413b-b954-47e4674c3a99 took: 22.075551ms Aug 17 00:10:42.826: INFO: Terminating ReplicationController wrapped-volume-race-3fdc2d7c-aff5-413b-b954-47e4674c3a99 pods took: 500.266087ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:11:01.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4973" for this suite. • [SLOW TEST:126.499 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":294,"completed":139,"skipped":2407,"failed":0} [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:11:01.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7016 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7016 STEP: creating replication controller externalsvc in namespace services-7016 I0817 00:11:01.911145 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7016, replica count: 2 I0817 00:11:04.961636 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0817 00:11:07.961838 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:11:10.961970 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:11:13.962161 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 17 00:11:14.733: INFO: Creating new exec pod Aug 17 00:11:21.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7016 execpodtrb6d -- /bin/sh -x -c nslookup nodeport-service.services-7016.svc.cluster.local' Aug 17 00:11:21.290: INFO: stderr: "I0817 00:11:21.193351 1883 log.go:181] (0xc000d06630) (0xc0002fa320) Create stream\nI0817 00:11:21.193397 1883 log.go:181] (0xc000d06630) (0xc0002fa320) Stream added, broadcasting: 1\nI0817 00:11:21.195912 1883 log.go:181] (0xc000d06630) Reply frame received for 1\nI0817 00:11:21.195945 1883 log.go:181] (0xc000d06630) (0xc0001972c0) Create stream\nI0817 00:11:21.195953 1883 log.go:181] (0xc000d06630) (0xc0001972c0) Stream added, broadcasting: 3\nI0817 00:11:21.196627 1883 log.go:181] (0xc000d06630) Reply frame received for 3\nI0817 00:11:21.196654 1883 log.go:181] (0xc000d06630) (0xc000a3b180) Create stream\nI0817 00:11:21.196668 1883 log.go:181] (0xc000d06630) (0xc000a3b180) Stream added, broadcasting: 5\nI0817 00:11:21.197283 1883 log.go:181] (0xc000d06630) Reply frame received for 5\nI0817 00:11:21.278288 1883 log.go:181] (0xc000d06630) Data frame received for 5\nI0817 00:11:21.278311 1883 log.go:181] (0xc000a3b180) (5) Data frame handling\nI0817 00:11:21.278322 1883 log.go:181] (0xc000a3b180) (5) Data frame sent\n+ nslookup nodeport-service.services-7016.svc.cluster.local\nI0817 00:11:21.281914 1883 log.go:181] (0xc000d06630) Data frame received for 3\nI0817 00:11:21.281935 1883 log.go:181] (0xc0001972c0) (3) Data frame handling\nI0817 00:11:21.281948 1883 log.go:181] (0xc0001972c0) (3) Data frame sent\nI0817 00:11:21.282661 1883 log.go:181] (0xc000d06630) Data frame received for 3\nI0817 00:11:21.282674 1883 log.go:181] (0xc0001972c0) (3) Data frame handling\nI0817 00:11:21.282687 1883 log.go:181] (0xc0001972c0) (3) Data frame sent\nI0817 00:11:21.283043 1883 log.go:181] (0xc000d06630) Data frame received for 3\nI0817 00:11:21.283068 1883 log.go:181] (0xc0001972c0) (3) Data frame handling\nI0817 00:11:21.283117 1883 log.go:181] (0xc000d06630) Data frame received for 5\nI0817 00:11:21.283130 1883 log.go:181] (0xc000a3b180) (5) Data frame handling\nI0817 00:11:21.284285 1883 log.go:181] (0xc000d06630) Data frame received for 1\nI0817 00:11:21.284298 1883 log.go:181] (0xc0002fa320) (1) Data frame handling\nI0817 00:11:21.284304 1883 log.go:181] (0xc0002fa320) (1) Data frame sent\nI0817 00:11:21.284311 1883 log.go:181] (0xc000d06630) (0xc0002fa320) Stream removed, broadcasting: 1\nI0817 00:11:21.284324 1883 log.go:181] (0xc000d06630) Go away received\nI0817 00:11:21.284555 1883 log.go:181] (0xc000d06630) (0xc0002fa320) Stream removed, broadcasting: 1\nI0817 00:11:21.284565 1883 log.go:181] (0xc000d06630) (0xc0001972c0) Stream removed, broadcasting: 3\nI0817 00:11:21.284572 1883 log.go:181] (0xc000d06630) (0xc000a3b180) Stream removed, broadcasting: 5\n" Aug 17 00:11:21.290: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7016.svc.cluster.local\tcanonical name = externalsvc.services-7016.svc.cluster.local.\nName:\texternalsvc.services-7016.svc.cluster.local\nAddress: 10.110.237.217\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7016, will wait for the garbage collector to delete the pods Aug 17 00:11:21.358: INFO: Deleting ReplicationController externalsvc took: 14.505209ms Aug 17 00:11:21.758: INFO: Terminating ReplicationController externalsvc pods took: 400.172582ms Aug 17 00:11:27.504: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:11:27.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7016" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:26.272 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":294,"completed":140,"skipped":2407,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:11:27.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 00:11:28.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b" in namespace "projected-5845" to be "Succeeded or Failed" Aug 17 00:11:28.714: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 224.031524ms Aug 17 00:11:30.869: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.379414014s Aug 17 00:11:33.031: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540786006s Aug 17 00:11:35.361: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.870468546s Aug 17 00:11:37.449: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.95932716s STEP: Saw pod success Aug 17 00:11:37.449: INFO: Pod "downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b" satisfied condition "Succeeded or Failed" Aug 17 00:11:37.451: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b container client-container: STEP: delete the pod Aug 17 00:11:37.977: INFO: Waiting for pod downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b to disappear Aug 17 00:11:38.091: INFO: Pod downwardapi-volume-df38678a-c2ee-4868-93a2-b43f5b4f6e9b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:11:38.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5845" for this suite. • [SLOW TEST:10.227 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":294,"completed":141,"skipped":2416,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:11:38.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6488 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 00:11:38.665: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 00:11:40.122: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:11:43.379: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:11:44.367: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) Aug 17 00:11:46.355: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:11:48.171: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:11:50.126: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:11:52.126: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:11:54.126: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:11:56.211: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:11:58.126: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:12:00.127: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:12:02.126: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 00:12:02.133: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 00:12:08.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.29:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6488 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:12:08.282: INFO: >>> kubeConfig: /root/.kube/config I0817 00:12:08.312288 7 log.go:181] (0xc0033833f0) (0xc000543900) Create stream I0817 00:12:08.312314 7 log.go:181] (0xc0033833f0) (0xc000543900) Stream added, broadcasting: 1 I0817 00:12:08.313627 7 log.go:181] (0xc0033833f0) Reply frame received for 1 I0817 00:12:08.313662 7 log.go:181] (0xc0033833f0) (0xc003249860) Create stream I0817 00:12:08.313674 7 log.go:181] (0xc0033833f0) (0xc003249860) Stream added, broadcasting: 3 I0817 00:12:08.314373 7 log.go:181] (0xc0033833f0) Reply frame received for 3 I0817 00:12:08.314391 7 log.go:181] (0xc0033833f0) (0xc001433ea0) Create stream I0817 00:12:08.314400 7 log.go:181] (0xc0033833f0) (0xc001433ea0) Stream added, broadcasting: 5 I0817 00:12:08.315040 7 log.go:181] (0xc0033833f0) Reply frame received for 5 I0817 00:12:08.378336 7 log.go:181] (0xc0033833f0) Data frame received for 3 I0817 00:12:08.378372 7 log.go:181] (0xc003249860) (3) Data frame handling I0817 00:12:08.378391 7 log.go:181] (0xc003249860) (3) Data frame sent I0817 00:12:08.378410 7 log.go:181] (0xc0033833f0) Data frame received for 3 I0817 00:12:08.378427 7 log.go:181] (0xc003249860) (3) Data frame handling I0817 00:12:08.378471 7 log.go:181] (0xc0033833f0) Data frame received for 5 I0817 00:12:08.378486 7 log.go:181] (0xc001433ea0) (5) Data frame handling I0817 00:12:08.379935 7 log.go:181] (0xc0033833f0) Data frame received for 1 I0817 00:12:08.379962 7 log.go:181] (0xc000543900) (1) Data frame handling I0817 00:12:08.379971 7 log.go:181] (0xc000543900) (1) Data frame sent I0817 00:12:08.379982 7 log.go:181] (0xc0033833f0) (0xc000543900) Stream removed, broadcasting: 1 I0817 00:12:08.379997 7 log.go:181] (0xc0033833f0) Go away received I0817 00:12:08.380082 7 log.go:181] (0xc0033833f0) (0xc000543900) Stream removed, broadcasting: 1 I0817 00:12:08.380098 7 log.go:181] (0xc0033833f0) (0xc003249860) Stream removed, broadcasting: 3 I0817 00:12:08.380105 7 log.go:181] (0xc0033833f0) (0xc001433ea0) Stream removed, broadcasting: 5 Aug 17 00:12:08.380: INFO: Found all expected endpoints: [netserver-0] Aug 17 00:12:08.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.21:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6488 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:12:08.382: INFO: >>> kubeConfig: /root/.kube/config I0817 00:12:08.410172 7 log.go:181] (0xc0068e8370) (0xc001f423c0) Create stream I0817 00:12:08.410200 7 log.go:181] (0xc0068e8370) (0xc001f423c0) Stream added, broadcasting: 1 I0817 00:12:08.411446 7 log.go:181] (0xc0068e8370) Reply frame received for 1 I0817 00:12:08.411469 7 log.go:181] (0xc0068e8370) (0xc000543d60) Create stream I0817 00:12:08.411476 7 log.go:181] (0xc0068e8370) (0xc000543d60) Stream added, broadcasting: 3 I0817 00:12:08.412219 7 log.go:181] (0xc0068e8370) Reply frame received for 3 I0817 00:12:08.412262 7 log.go:181] (0xc0068e8370) (0xc0032e0000) Create stream I0817 00:12:08.412275 7 log.go:181] (0xc0068e8370) (0xc0032e0000) Stream added, broadcasting: 5 I0817 00:12:08.413014 7 log.go:181] (0xc0068e8370) Reply frame received for 5 I0817 00:12:08.471223 7 log.go:181] (0xc0068e8370) Data frame received for 3 I0817 00:12:08.471251 7 log.go:181] (0xc000543d60) (3) Data frame handling I0817 00:12:08.471263 7 log.go:181] (0xc000543d60) (3) Data frame sent I0817 00:12:08.471276 7 log.go:181] (0xc0068e8370) Data frame received for 3 I0817 00:12:08.471287 7 log.go:181] (0xc000543d60) (3) Data frame handling I0817 00:12:08.471302 7 log.go:181] (0xc0068e8370) Data frame received for 5 I0817 00:12:08.471313 7 log.go:181] (0xc0032e0000) (5) Data frame handling I0817 00:12:08.472553 7 log.go:181] (0xc0068e8370) Data frame received for 1 I0817 00:12:08.472568 7 log.go:181] (0xc001f423c0) (1) Data frame handling I0817 00:12:08.472579 7 log.go:181] (0xc001f423c0) (1) Data frame sent I0817 00:12:08.472591 7 log.go:181] (0xc0068e8370) (0xc001f423c0) Stream removed, broadcasting: 1 I0817 00:12:08.472662 7 log.go:181] (0xc0068e8370) (0xc001f423c0) Stream removed, broadcasting: 1 I0817 00:12:08.472673 7 log.go:181] (0xc0068e8370) (0xc000543d60) Stream removed, broadcasting: 3 I0817 00:12:08.472810 7 log.go:181] (0xc0068e8370) (0xc0032e0000) Stream removed, broadcasting: 5 I0817 00:12:08.472885 7 log.go:181] (0xc0068e8370) Go away received Aug 17 00:12:08.472: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:12:08.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6488" for this suite. 
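
The connectivity probe in this test is an exec into a host-network helper pod that curls each netserver pod's /hostName endpoint on port 8080, as the ExecWithOptions lines above show. Run by hand it looks like the following (the pod IP is taken from this run and would differ on another cluster; the namespace has since been destroyed):

  kubectl exec -n pod-network-test-6488 host-test-container-pod -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.29:8080/hostName"

The returned hostname identifies which netserver answered, so the test can confirm it reached every expected endpoint.
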
• [SLOW TEST:30.380 seconds] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":142,"skipped":2416,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:12:08.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:12:24.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1547" for this suite. • [SLOW TEST:17.064 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":294,"completed":143,"skipped":2422,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:12:25.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 17 00:12:40.250: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:40.295: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 00:12:42.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:42.319: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 00:12:44.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:44.409: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 00:12:46.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:46.308: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 00:12:48.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:48.769: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 00:12:50.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 00:12:50.320: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:12:50.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7489" for this suite. 
• [SLOW TEST:24.793 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":294,"completed":144,"skipped":2443,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:12:50.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-6f3a9bd6-007d-42b1-98a6-995bba531599 STEP: Creating a pod to test consume configMaps Aug 17 00:12:50.677: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b" in namespace "configmap-1356" to be "Succeeded or Failed" Aug 17 00:12:50.746: INFO: Pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.040137ms Aug 17 00:12:52.751: INFO: Pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073729814s Aug 17 00:12:54.754: INFO: Pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077332012s Aug 17 00:12:56.883: INFO: Pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.205556182s STEP: Saw pod success Aug 17 00:12:56.883: INFO: Pod "pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b" satisfied condition "Succeeded or Failed" Aug 17 00:12:57.181: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b container configmap-volume-test: STEP: delete the pod Aug 17 00:12:57.518: INFO: Waiting for pod pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b to disappear Aug 17 00:12:57.675: INFO: Pod pod-configmaps-f8cf02ef-5e22-458a-84d8-7c5db12d672b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:12:57.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1356" for this suite. • [SLOW TEST:7.349 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":145,"skipped":2458,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:12:57.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 17 00:12:58.146: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:12:58.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8279" for this suite. 
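
Passing -p 0 (long form --port=0) asks kubectl proxy to bind an ephemeral port chosen by the OS and print it on startup, which is the address the test then curls. A hand-run sketch; the port shown is a placeholder, not a fixed value:

  kubectl proxy --port=0 --disable-filter=true &
  # Starting to serve on 127.0.0.1:<ephemeral-port>
  curl http://127.0.0.1:<ephemeral-port>/api/
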
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":294,"completed":146,"skipped":2479,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:12:58.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 17 00:12:59.794: INFO: created pod pod-service-account-defaultsa Aug 17 00:12:59.794: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 17 00:13:00.026: INFO: created pod pod-service-account-mountsa Aug 17 00:13:00.027: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 17 00:13:00.031: INFO: created pod pod-service-account-nomountsa Aug 17 00:13:00.031: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 17 00:13:00.087: INFO: created pod pod-service-account-defaultsa-mountspec Aug 17 00:13:00.087: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 17 00:13:00.275: INFO: created pod pod-service-account-mountsa-mountspec Aug 17 00:13:00.275: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 17 00:13:00.613: INFO: created pod pod-service-account-nomountsa-mountspec Aug 17 00:13:00.613: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 17 00:13:00.751: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 17 00:13:00.751: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 17 00:13:01.177: INFO: created pod pod-service-account-mountsa-nomountspec Aug 17 00:13:01.177: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 17 00:13:01.798: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 17 00:13:01.798: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:13:01.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1794" for this suite. 
• [SLOW TEST:5.476 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":294,"completed":147,"skipped":2480,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:13:04.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7905 STEP: creating service affinity-clusterip-transition in namespace services-7905 STEP: creating replication controller affinity-clusterip-transition in namespace services-7905 I0817 00:13:06.650370 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7905, replica count: 3 I0817 00:13:09.700888 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:13:12.701126 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:13:15.701346 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:13:18.701643 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:13:21.701830 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 00:13:21.730: INFO: Creating new exec pod Aug 17 00:13:33.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7905 execpod-affinityfz4vz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 17 00:13:33.881: INFO: stderr: "I0817 00:13:33.811498 1919 log.go:181] (0xc000e0ac60) (0xc000b161e0) Create stream\nI0817 00:13:33.811550 
1919 log.go:181] (0xc000e0ac60) (0xc000b161e0) Stream added, broadcasting: 1\nI0817 00:13:33.814721 1919 log.go:181] (0xc000e0ac60) Reply frame received for 1\nI0817 00:13:33.814761 1919 log.go:181] (0xc000e0ac60) (0xc000b0d180) Create stream\nI0817 00:13:33.814774 1919 log.go:181] (0xc000e0ac60) (0xc000b0d180) Stream added, broadcasting: 3\nI0817 00:13:33.815607 1919 log.go:181] (0xc000e0ac60) Reply frame received for 3\nI0817 00:13:33.815630 1919 log.go:181] (0xc000e0ac60) (0xc000a1a960) Create stream\nI0817 00:13:33.815638 1919 log.go:181] (0xc000e0ac60) (0xc000a1a960) Stream added, broadcasting: 5\nI0817 00:13:33.816352 1919 log.go:181] (0xc000e0ac60) Reply frame received for 5\nI0817 00:13:33.873766 1919 log.go:181] (0xc000e0ac60) Data frame received for 5\nI0817 00:13:33.873800 1919 log.go:181] (0xc000a1a960) (5) Data frame handling\nI0817 00:13:33.873829 1919 log.go:181] (0xc000a1a960) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0817 00:13:33.873848 1919 log.go:181] (0xc000e0ac60) Data frame received for 3\nI0817 00:13:33.873859 1919 log.go:181] (0xc000b0d180) (3) Data frame handling\nI0817 00:13:33.873871 1919 log.go:181] (0xc000e0ac60) Data frame received for 5\nI0817 00:13:33.873890 1919 log.go:181] (0xc000a1a960) (5) Data frame handling\nI0817 00:13:33.873912 1919 log.go:181] (0xc000a1a960) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0817 00:13:33.874143 1919 log.go:181] (0xc000e0ac60) Data frame received for 5\nI0817 00:13:33.874154 1919 log.go:181] (0xc000a1a960) (5) Data frame handling\nI0817 00:13:33.875716 1919 log.go:181] (0xc000e0ac60) Data frame received for 1\nI0817 00:13:33.875731 1919 log.go:181] (0xc000b161e0) (1) Data frame handling\nI0817 00:13:33.875742 1919 log.go:181] (0xc000b161e0) (1) Data frame sent\nI0817 00:13:33.875761 1919 log.go:181] (0xc000e0ac60) (0xc000b161e0) Stream removed, broadcasting: 1\nI0817 00:13:33.875978 1919 log.go:181] (0xc000e0ac60) Go away received\nI0817 00:13:33.876071 1919 log.go:181] (0xc000e0ac60) (0xc000b161e0) Stream removed, broadcasting: 1\nI0817 00:13:33.876085 1919 log.go:181] (0xc000e0ac60) (0xc000b0d180) Stream removed, broadcasting: 3\nI0817 00:13:33.876092 1919 log.go:181] (0xc000e0ac60) (0xc000a1a960) Stream removed, broadcasting: 5\n" Aug 17 00:13:33.881: INFO: stdout: "" Aug 17 00:13:33.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7905 execpod-affinityfz4vz -- /bin/sh -x -c nc -zv -t -w 2 10.96.135.71 80' Aug 17 00:13:34.088: INFO: stderr: "I0817 00:13:34.017609 1937 log.go:181] (0xc0005bc160) (0xc0006a66e0) Create stream\nI0817 00:13:34.017670 1937 log.go:181] (0xc0005bc160) (0xc0006a66e0) Stream added, broadcasting: 1\nI0817 00:13:34.019532 1937 log.go:181] (0xc0005bc160) Reply frame received for 1\nI0817 00:13:34.019576 1937 log.go:181] (0xc0005bc160) (0xc000612320) Create stream\nI0817 00:13:34.019588 1937 log.go:181] (0xc0005bc160) (0xc000612320) Stream added, broadcasting: 3\nI0817 00:13:34.020431 1937 log.go:181] (0xc0005bc160) Reply frame received for 3\nI0817 00:13:34.020471 1937 log.go:181] (0xc0005bc160) (0xc00059f360) Create stream\nI0817 00:13:34.020514 1937 log.go:181] (0xc0005bc160) (0xc00059f360) Stream added, broadcasting: 5\nI0817 00:13:34.021640 1937 log.go:181] (0xc0005bc160) Reply frame received for 5\nI0817 00:13:34.078963 1937 log.go:181] (0xc0005bc160) Data frame received for 3\nI0817 00:13:34.079014 1937 log.go:181] 
(0xc000612320) (3) Data frame handling\nI0817 00:13:34.079044 1937 log.go:181] (0xc0005bc160) Data frame received for 5\nI0817 00:13:34.079057 1937 log.go:181] (0xc00059f360) (5) Data frame handling\nI0817 00:13:34.079073 1937 log.go:181] (0xc00059f360) (5) Data frame sent\nI0817 00:13:34.079087 1937 log.go:181] (0xc0005bc160) Data frame received for 5\nI0817 00:13:34.079099 1937 log.go:181] (0xc00059f360) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.135.71 80\nConnection to 10.96.135.71 80 port [tcp/http] succeeded!\nI0817 00:13:34.080285 1937 log.go:181] (0xc0005bc160) Data frame received for 1\nI0817 00:13:34.080299 1937 log.go:181] (0xc0006a66e0) (1) Data frame handling\nI0817 00:13:34.080320 1937 log.go:181] (0xc0006a66e0) (1) Data frame sent\nI0817 00:13:34.080375 1937 log.go:181] (0xc0005bc160) (0xc0006a66e0) Stream removed, broadcasting: 1\nI0817 00:13:34.080473 1937 log.go:181] (0xc0005bc160) Go away received\nI0817 00:13:34.080669 1937 log.go:181] (0xc0005bc160) (0xc0006a66e0) Stream removed, broadcasting: 1\nI0817 00:13:34.080682 1937 log.go:181] (0xc0005bc160) (0xc000612320) Stream removed, broadcasting: 3\nI0817 00:13:34.080689 1937 log.go:181] (0xc0005bc160) (0xc00059f360) Stream removed, broadcasting: 5\n" Aug 17 00:13:34.088: INFO: stdout: "" Aug 17 00:13:34.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7905 execpod-affinityfz4vz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.135.71:80/ ; done' Aug 17 00:13:34.577: INFO: stderr: "I0817 00:13:34.407627 1955 log.go:181] (0xc0006471e0) (0xc000b85860) Create stream\nI0817 00:13:34.407672 1955 log.go:181] (0xc0006471e0) (0xc000b85860) Stream added, broadcasting: 1\nI0817 00:13:34.410149 1955 log.go:181] (0xc0006471e0) Reply frame received for 1\nI0817 00:13:34.410213 1955 log.go:181] (0xc0006471e0) (0xc00093dae0) Create stream\nI0817 00:13:34.410228 1955 log.go:181] (0xc0006471e0) (0xc00093dae0) Stream added, broadcasting: 3\nI0817 00:13:34.411098 1955 log.go:181] (0xc0006471e0) Reply frame received for 3\nI0817 00:13:34.411151 1955 log.go:181] (0xc0006471e0) (0xc0004457c0) Create stream\nI0817 00:13:34.411175 1955 log.go:181] (0xc0006471e0) (0xc0004457c0) Stream added, broadcasting: 5\nI0817 00:13:34.411970 1955 log.go:181] (0xc0006471e0) Reply frame received for 5\nI0817 00:13:34.470554 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.470590 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.470603 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.470620 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.470626 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.470634 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.474472 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.474492 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.474499 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.475203 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.475222 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.475228 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.475238 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.475242 1955 log.go:181] 
(0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.475247 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.475257 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.475261 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.475273 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.480089 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.480102 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.480117 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.481032 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.481048 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.481062 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.481120 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.481142 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.481167 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.485387 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.485399 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.485411 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.486199 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.486218 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.486232 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.486246 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.486252 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.486263 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.491088 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.491108 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.491129 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.491839 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.491861 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.491871 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/I0817 00:13:34.491884 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.491900 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.491913 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.491925 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.491932 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.491942 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n\nI0817 00:13:34.499025 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.499042 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.499056 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.499554 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.499573 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.499593 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.499613 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.499644 1955 log.go:181] 
(0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.499672 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.505738 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.505769 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.505800 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.506237 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.506256 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.506267 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.506281 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.506289 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.506297 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.511077 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.511112 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.511139 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.511760 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.511780 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.511798 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.511821 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.511837 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.511854 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.518255 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.518277 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.518289 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.519090 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.519116 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.519143 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.519157 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.519172 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.519186 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.524977 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.524993 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.525007 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.525690 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.525721 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.525735 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.525755 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.525765 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.525775 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.532896 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.532917 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.532928 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.532934 1955 log.go:181] (0xc0006471e0) 
Data frame received for 3\nI0817 00:13:34.532938 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.532952 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.532974 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.532986 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.532997 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.533007 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.533037 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.533071 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.536382 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.536403 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.536423 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.537095 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.537107 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.537114 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.537125 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.537129 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.537135 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.541177 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.541201 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.541214 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.541927 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.541943 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.541958 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.541965 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.541970 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.541975 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.541995 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.542040 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.542058 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\nI0817 00:13:34.545590 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.545601 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.545607 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.546483 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.546505 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.546520 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.546533 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.546569 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.546586 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.552601 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.552624 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.552640 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.553172 1955 log.go:181] (0xc0006471e0) 
Data frame received for 3\nI0817 00:13:34.553204 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.553226 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.553245 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.553283 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.553319 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.558362 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.558376 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.558383 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.558939 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.558976 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.558990 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.559024 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.559050 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.559069 1955 log.go:181] (0xc0004457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:34.566297 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.566329 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.566351 1955 log.go:181] (0xc00093dae0) (3) Data frame sent\nI0817 00:13:34.567340 1955 log.go:181] (0xc0006471e0) Data frame received for 5\nI0817 00:13:34.567371 1955 log.go:181] (0xc0004457c0) (5) Data frame handling\nI0817 00:13:34.567402 1955 log.go:181] (0xc0006471e0) Data frame received for 3\nI0817 00:13:34.567419 1955 log.go:181] (0xc00093dae0) (3) Data frame handling\nI0817 00:13:34.569029 1955 log.go:181] (0xc0006471e0) Data frame received for 1\nI0817 00:13:34.569054 1955 log.go:181] (0xc000b85860) (1) Data frame handling\nI0817 00:13:34.569089 1955 log.go:181] (0xc000b85860) (1) Data frame sent\nI0817 00:13:34.569113 1955 log.go:181] (0xc0006471e0) (0xc000b85860) Stream removed, broadcasting: 1\nI0817 00:13:34.569414 1955 log.go:181] (0xc0006471e0) Go away received\nI0817 00:13:34.569579 1955 log.go:181] (0xc0006471e0) (0xc000b85860) Stream removed, broadcasting: 1\nI0817 00:13:34.569600 1955 log.go:181] (0xc0006471e0) (0xc00093dae0) Stream removed, broadcasting: 3\nI0817 00:13:34.569612 1955 log.go:181] (0xc0006471e0) (0xc0004457c0) Stream removed, broadcasting: 5\n" Aug 17 00:13:34.578: INFO: stdout: "\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-fmb9g\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-5hjkl\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz" Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: 
Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-fmb9g Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-5hjkl Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.578: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:34.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7905 execpod-affinityfz4vz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.135.71:80/ ; done' Aug 17 00:13:35.239: INFO: stderr: "I0817 00:13:35.069365 1973 log.go:181] (0xc000c0af20) (0xc0003761e0) Create stream\nI0817 00:13:35.069416 1973 log.go:181] (0xc000c0af20) (0xc0003761e0) Stream added, broadcasting: 1\nI0817 00:13:35.071738 1973 log.go:181] (0xc000c0af20) Reply frame received for 1\nI0817 00:13:35.071796 1973 log.go:181] (0xc000c0af20) (0xc0004448c0) Create stream\nI0817 00:13:35.071817 1973 log.go:181] (0xc000c0af20) (0xc0004448c0) Stream added, broadcasting: 3\nI0817 00:13:35.073567 1973 log.go:181] (0xc000c0af20) Reply frame received for 3\nI0817 00:13:35.073606 1973 log.go:181] (0xc000c0af20) (0xc0001226e0) Create stream\nI0817 00:13:35.073616 1973 log.go:181] (0xc000c0af20) (0xc0001226e0) Stream added, broadcasting: 5\nI0817 00:13:35.074436 1973 log.go:181] (0xc000c0af20) Reply frame received for 5\nI0817 00:13:35.135381 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.135404 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.135415 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ seq 0 15\n+ echo\nI0817 00:13:35.136175 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.136204 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.136216 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.136229 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.136235 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.136241 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.145449 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.145467 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.145480 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.145837 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.145858 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.145866 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.145876 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 
00:13:35.145883 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.145892 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.149377 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.149396 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.149418 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.149962 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.149991 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.150005 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.150024 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.150034 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.150051 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.153338 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.153350 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.153358 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.153759 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.153773 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.153787 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.153797 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.153802 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.153808 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.159250 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.159268 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.159284 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.159728 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.159751 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.159767 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.159806 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.159819 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.159825 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.164880 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.164894 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.164902 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.165617 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.165646 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.165662 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.165677 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.165688 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.165696 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.170917 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.170941 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.170968 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.171542 
1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.171570 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.171582 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.171597 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.171614 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.171622 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.175054 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.175071 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.175079 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.176191 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.176210 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.176227 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.176248 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.176260 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.176276 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.179476 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.179515 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.179540 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.180332 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.180372 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.180387 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.180405 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.180414 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.180424 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.184074 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.184104 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.184128 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.184309 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.184344 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.184360 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.184389 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.184419 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.184434 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.192953 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.192977 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.192996 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.194025 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.194058 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.194071 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.194095 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.194107 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.194118 1973 
log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.198714 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.198739 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.198763 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.199260 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.199280 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.199300 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.199307 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.199316 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.199322 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.203660 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.203682 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.203701 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.204134 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.204153 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.204162 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.204173 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.204179 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.204186 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.209879 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.209900 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.209916 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.210504 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.210518 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.210525 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.210550 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.210570 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.210584 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.216051 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.216069 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.216078 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.216575 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.216601 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.216630 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.216645 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\nI0817 00:13:35.216663 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.216679 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.221723 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.221744 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.221762 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.222335 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.222346 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.222359 1973 
log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.222375 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.222392 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\nI0817 00:13:35.222400 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.222407 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.135.71:80/\nI0817 00:13:35.222424 1973 log.go:181] (0xc0001226e0) (5) Data frame sent\nI0817 00:13:35.222434 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.228327 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.228352 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.228370 1973 log.go:181] (0xc0004448c0) (3) Data frame sent\nI0817 00:13:35.229398 1973 log.go:181] (0xc000c0af20) Data frame received for 3\nI0817 00:13:35.229428 1973 log.go:181] (0xc0004448c0) (3) Data frame handling\nI0817 00:13:35.229519 1973 log.go:181] (0xc000c0af20) Data frame received for 5\nI0817 00:13:35.229535 1973 log.go:181] (0xc0001226e0) (5) Data frame handling\nI0817 00:13:35.231150 1973 log.go:181] (0xc000c0af20) Data frame received for 1\nI0817 00:13:35.231173 1973 log.go:181] (0xc0003761e0) (1) Data frame handling\nI0817 00:13:35.231198 1973 log.go:181] (0xc0003761e0) (1) Data frame sent\nI0817 00:13:35.231223 1973 log.go:181] (0xc000c0af20) (0xc0003761e0) Stream removed, broadcasting: 1\nI0817 00:13:35.231310 1973 log.go:181] (0xc000c0af20) Go away received\nI0817 00:13:35.231663 1973 log.go:181] (0xc000c0af20) (0xc0003761e0) Stream removed, broadcasting: 1\nI0817 00:13:35.231681 1973 log.go:181] (0xc000c0af20) (0xc0004448c0) Stream removed, broadcasting: 3\nI0817 00:13:35.231689 1973 log.go:181] (0xc000c0af20) (0xc0001226e0) Stream removed, broadcasting: 5\n" Aug 17 00:13:35.240: INFO: stdout: "\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz\naffinity-clusterip-transition-krnpz" Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: 
affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Received response from host: affinity-clusterip-transition-krnpz Aug 17 00:13:35.240: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7905, will wait for the garbage collector to delete the pods Aug 17 00:13:35.568: INFO: Deleting ReplicationController affinity-clusterip-transition took: 144.251613ms Aug 17 00:13:36.268: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 700.208991ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:13:51.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7905" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:48.070 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":148,"skipped":2499,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:13:52.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 00:13:52.179: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 00:13:52.237: INFO: Waiting for terminating namespaces to be deleted... 
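Before the scheduler test's node inventory continues below, a note on the Services result just logged: the only API change between the two curl batches is Service.spec.sessionAffinity, which is why the first batch rotates across three backends while the second pins every request to affinity-clusterip-transition-krnpz. A minimal sketch of that transition with k8s.io/api types; the 10800-second timeout is the documented ClientIP default, not a value taken from this log.

```go
// Minimal sketch of the field the Services test toggles:
// Service.spec.sessionAffinity, None <-> ClientIP.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-transition"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip-transition"},
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityNone, // first curl batch: backends rotate
		},
	}

	// Switching affinity on (an Update() against the live object in practice)
	// pins each client IP to one backend, as the second curl batch shows.
	timeout := int32(10800)
	svc.Spec.SessionAffinity = corev1.ServiceAffinityClientIP
	svc.Spec.SessionAffinityConfig = &corev1.SessionAffinityConfig{
		ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
	}
	fmt.Printf("service %s sessionAffinity: %s\n", svc.Name, svc.Spec.SessionAffinity)
}
```

The node inventory for the sched-pred test resumes below.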
Aug 17 00:13:52.241: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 00:13:52.246: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:13:52.246: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:13:52.246: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:13:52.246: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:13:52.246: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-1794 started at 2020-08-17 00:13:03 +0000 UTC (1 container status recorded) Aug 17 00:13:52.246: INFO: Container token-test ready: true, restart count 0 Aug 17 00:13:52.246: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 00:13:52.249: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:13:52.249: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:13:52.249: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 00:13:52.249: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d2927173-be45-49c3-8d0a-f37c7fe7ae8a 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d2927173-be45-49c3-8d0a-f37c7fe7ae8a off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d2927173-be45-49c3-8d0a-f37c7fe7ae8a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:19:04.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9527" for this suite.
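The pod4/pod5 pair above demonstrates the hostPort predicate: a hostIP of 0.0.0.0 (or empty) claims the host port on every address of the node, so a later pod requesting the same port and protocol on 127.0.0.1 conflicts and stays Pending there. A minimal sketch of the two specs with k8s.io/api types; the node-selector label and the pause image are illustrative stand-ins for the test's random label and test image.

```go
// Minimal sketch of the hostPort conflict: identical hostPort+protocol,
// differing only in hostIP, pinned to the same node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithHostPort(name, hostIP string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Pin both pods to the same labeled node, as the test does
			// with its random e2e label.
			NodeSelector: map[string]string{"example.com/e2e-label": "95"},
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/pause:3.2",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	pod4 := podWithHostPort("pod4", "0.0.0.0")   // claims 54322 on every address: schedules
	pod5 := podWithHostPort("pod5", "127.0.0.1") // same port/protocol on that node: stays Pending
	fmt.Println(pod4.Name, pod5.Name)
}
```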
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:312.869 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":294,"completed":149,"skipped":2511,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:19:04.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 00:19:05.244: INFO: PodSpec: initContainers in spec.initContainers Aug 17 00:20:04.234: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-52e67b7f-9d16-4df0-aa02-425be65026cf", GenerateName:"", Namespace:"init-container-5052", SelfLink:"/api/v1/namespaces/init-container-5052/pods/pod-init-52e67b7f-9d16-4df0-aa02-425be65026cf", UID:"47ed5c04-ec4d-4fe3-b544-70cc13dcac2c", ResourceVersion:"544665", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733220345, loc:(*time.Location)(0x7e21f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"244402930"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00535f560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00535f580)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00535f5a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00535f5c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6kxcw", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005d34280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6kxcw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6kxcw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6kxcw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0059ba038), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0016055e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0059ba0c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0059ba0e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0059ba0e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0059ba0ec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc008e783e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220345, loc:(*time.Location)(0x7e21f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220345, loc:(*time.Location)(0x7e21f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220345, loc:(*time.Location)(0x7e21f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220345, loc:(*time.Location)(0x7e21f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.1.31", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.31"}}, StartTime:(*v1.Time)(0xc00535f5e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001605730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016057a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dbbad34a434496d717b05527fe395d33f1b3f8264b14db4cacc9416abcf04d4a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00535f620), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00535f600), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0059ba16f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:04.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5052" for this suite. 
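Annotation: the struct dump above is the pod the test was deliberately stuck waiting on. init1 keeps exiting non-zero, so init2 stays Waiting and the app container run1 never starts, which is exactly what the conformance check asserts for a RestartAlways pod (init containers run sequentially and each must exit 0 before the next starts). A minimal way to reproduce this outside the suite, with an illustrative pod name and the same images the test uses:

# init1 always fails, so init2 and run1 must never start; the pod stays
# Pending with Initialized=False while init1 restarts under backoff.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
EOF
kubectl get pod init-fail-demo -w   # RESTARTS climbs on init1 only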
• [SLOW TEST:59.337 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":294,"completed":150,"skipped":2511,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:04.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Aug 17 00:20:04.427: INFO: Waiting up to 5m0s for pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0" in namespace "containers-1832" to be "Succeeded or Failed" Aug 17 00:20:04.449: INFO: Pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.568828ms Aug 17 00:20:06.452: INFO: Pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02522607s Aug 17 00:20:08.457: INFO: Pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0294166s Aug 17 00:20:10.459: INFO: Pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032169124s STEP: Saw pod success Aug 17 00:20:10.459: INFO: Pod "client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0" satisfied condition "Succeeded or Failed" Aug 17 00:20:10.461: INFO: Trying to get logs from node latest-worker pod client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0 container test-container: STEP: delete the pod Aug 17 00:20:10.807: INFO: Waiting for pod client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0 to disappear Aug 17 00:20:10.863: INFO: Pod client-containers-5bcef2cb-ea6e-4b09-90c8-f257593f95e0 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1832" for this suite. 
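Annotation: the Docker Containers test that just passed verifies that spec.containers[].command replaces the image's Docker ENTRYPOINT (while args corresponds to CMD). A sketch of the same behavior with illustrative names:

# The busybox entrypoint is replaced outright; the container echoes and exits.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo", "command overrides the image entrypoint"]
EOF
kubectl logs entrypoint-override-demo   # prints the overridden output once Succeeded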
• [SLOW TEST:6.570 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":294,"completed":151,"skipped":2525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:10.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-48d79e23-ba52-4130-a4c9-705a5f17867f STEP: Creating a pod to test consume secrets Aug 17 00:20:11.670: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68" in namespace "projected-5726" to be "Succeeded or Failed" Aug 17 00:20:11.794: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Pending", Reason="", readiness=false. Elapsed: 123.644506ms Aug 17 00:20:13.871: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200664032s Aug 17 00:20:16.076: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405571187s Aug 17 00:20:18.123: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452699966s Aug 17 00:20:20.254: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584254321s Aug 17 00:20:22.259: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.589287411s STEP: Saw pod success Aug 17 00:20:22.259: INFO: Pod "pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68" satisfied condition "Succeeded or Failed" Aug 17 00:20:22.261: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68 container projected-secret-volume-test: STEP: delete the pod Aug 17 00:20:22.424: INFO: Waiting for pod pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68 to disappear Aug 17 00:20:22.436: INFO: Pod pod-projected-secrets-5847d209-dffa-4ca4-a44d-2a6fe381da68 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:22.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5726" for this suite. • [SLOW TEST:11.571 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":152,"skipped":2548,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:22.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 00:20:22.578: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 00:20:22.585: INFO: Waiting for terminating namespaces to be deleted... 
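Annotation: the projected-secret test that passed above mounts a Secret through a "projected" volume and asserts the container can read the key back. A docs-style sketch with illustrative names (the suite generates its own secret and pod names):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # value-1, which is what the test asserts on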
Aug 17 00:20:22.589: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 17 00:20:22.593: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:20:22.593: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:20:22.593: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:20:22.594: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:20:22.594: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 17 00:20:22.598: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:20:22.598: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:20:22.598: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 17 00:20:22.598: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 17 00:20:22.690: INFO: Pod kindnet-gmpqb requesting resource cpu=100m on Node latest-worker Aug 17 00:20:22.690: INFO: Pod kindnet-grzzh requesting resource cpu=100m on Node latest-worker2 Aug 17 00:20:22.690: INFO: Pod kube-proxy-82wrf requesting resource cpu=0m on Node latest-worker Aug 17 00:20:22.690: INFO: Pod kube-proxy-fjk8r requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Aug 17 00:20:22.690: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Aug 17 00:20:22.695: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db.162be6394c11039a], Reason = [Created], Message = [Created container filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db] STEP: Considering event: Type = [Normal], Name = [filler-pod-064021eb-cf42-450c-9e4a-f9a543617568.162be638c42d5324], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-064021eb-cf42-450c-9e4a-f9a543617568.162be63961372963], Reason = [Started], Message = [Started container filler-pod-064021eb-cf42-450c-9e4a-f9a543617568] STEP: Considering event: Type = [Normal], Name = [filler-pod-064021eb-cf42-450c-9e4a-f9a543617568.162be63950af7a1b], Reason = [Created], Message = [Created container filler-pod-064021eb-cf42-450c-9e4a-f9a543617568] STEP: Considering event: Type = [Normal], Name = [filler-pod-064021eb-cf42-450c-9e4a-f9a543617568.162be638644b4cd7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7620/filler-pod-064021eb-cf42-450c-9e4a-f9a543617568 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db.162be638660162ca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7620/filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db.162be6395a68360d], Reason = [Started], Message = [Started container filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a858e81-f37e-46e0-bb39-e75adfae62db.162be638c87d6df3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Warning], Name = [additional-pod.162be639cd3b535a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162be639d40897fb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:29.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7620" for this suite. 
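Annotation: the FailedScheduling events above are the point of the test. After the filler pods request essentially all allocatable CPU on both workers, any further pod with a non-zero CPU request is rejected with "Insufficient cpu". The shape of such a request, with an illustrative size (the suite computes 11130m from node allocatable):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: filler-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 500m       # the e2e run sized this to fill the node
      limits:
        cpu: 500m
EOF
# Scheduling failures surface as events, as seen in the log:
kubectl get events --field-selector reason=FailedScheduling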
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.420 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":294,"completed":153,"skipped":2566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:29.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 17 00:20:29.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1839' Aug 17 00:20:33.218: INFO: stderr: "" Aug 17 00:20:33.218: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 Aug 17 00:20:33.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1839' Aug 17 00:20:39.686: INFO: stderr: "" Aug 17 00:20:39.686: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:39.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1839" for this suite. 
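Annotation: outside the harness the same check is two kubectl calls; this mirrors the invocation in the log minus the suite's --server/--kubeconfig plumbing. With --restart=Never, kubectl run creates a bare Pod rather than a managed workload:

kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod
kubectl delete pod e2e-test-httpd-pod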
• [SLOW TEST:9.933 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":294,"completed":154,"skipped":2590,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:39.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-09d4afa3-72da-4a10-861c-6e0092259936 STEP: Creating a pod to test consume configMaps Aug 17 00:20:39.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35" in namespace "projected-7015" to be "Succeeded or Failed" Aug 17 00:20:39.997: INFO: Pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35": Phase="Pending", Reason="", readiness=false. Elapsed: 3.975621ms Aug 17 00:20:42.135: INFO: Pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142023052s Aug 17 00:20:44.139: INFO: Pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145556982s Aug 17 00:20:46.143: INFO: Pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.149786715s STEP: Saw pod success Aug 17 00:20:46.143: INFO: Pod "pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35" satisfied condition "Succeeded or Failed" Aug 17 00:20:46.146: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35 container projected-configmap-volume-test: STEP: delete the pod Aug 17 00:20:46.181: INFO: Waiting for pod pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35 to disappear Aug 17 00:20:46.207: INFO: Pod pod-projected-configmaps-fd8b53cc-aba6-4f00-9d64-f8827d192f35 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:20:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7015" for this suite. • [SLOW TEST:6.430 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":155,"skipped":2598,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:20:46.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 in namespace container-probe-1441 Aug 17 00:20:50.351: INFO: Started pod liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 in namespace container-probe-1441 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 00:20:50.354: INFO: Initial restart count of pod liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is 0 Aug 17 00:21:08.441: INFO: Restart count of pod container-probe-1441/liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is now 1 (18.086673241s elapsed) Aug 17 00:21:30.679: INFO: Restart count of pod container-probe-1441/liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is now 2 (40.3244144s elapsed) Aug 17 00:21:50.466: INFO: Restart count of pod 
container-probe-1441/liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is now 3 (1m0.111894343s elapsed) Aug 17 00:22:10.554: INFO: Restart count of pod container-probe-1441/liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is now 4 (1m20.199748256s elapsed) Aug 17 00:23:22.048: INFO: Restart count of pod container-probe-1441/liveness-322be7c2-c4a2-4a27-8fda-0b2c5e4b1f84 is now 5 (2m31.693905225s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:23:22.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1441" for this suite. • [SLOW TEST:155.882 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":294,"completed":156,"skipped":2605,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:23:22.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:23:22.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9992" for this suite. 
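Annotation: two of the passes above are easy to replay by hand. The restart-count test relies on a liveness probe that starts failing once a health file disappears (this is the standard docs pattern, not the suite's exact pod), and the ServiceAccount lifecycle test is plain CRUD:

# Liveness: /tmp/health vanishes after 10s, the exec probe fails, kubelet
# restarts the container, and RESTARTS increases monotonically as asserted above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-demo -w

# ServiceAccount lifecycle: create, patch, find by label, delete.
kubectl create serviceaccount demo-sa
kubectl patch serviceaccount demo-sa -p '{"metadata":{"labels":{"purpose":"demo"}}}'
kubectl get serviceaccounts -l purpose=demo
kubectl delete serviceaccount demo-sa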
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":294,"completed":157,"skipped":2612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:23:22.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 17 00:23:22.794: INFO: namespace kubectl-9009 Aug 17 00:23:22.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9009' Aug 17 00:23:23.370: INFO: stderr: "" Aug 17 00:23:23.370: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 17 00:23:24.375: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:23:24.375: INFO: Found 0 / 1 Aug 17 00:23:25.404: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:23:25.404: INFO: Found 0 / 1 Aug 17 00:23:26.374: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:23:26.374: INFO: Found 0 / 1 Aug 17 00:23:27.509: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:23:27.510: INFO: Found 1 / 1 Aug 17 00:23:27.510: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 17 00:23:27.528: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:23:27.528: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 17 00:23:27.528: INFO: wait on agnhost-primary startup in kubectl-9009 Aug 17 00:23:27.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs agnhost-primary-bt6hf agnhost-primary --namespace=kubectl-9009' Aug 17 00:23:27.761: INFO: stderr: "" Aug 17 00:23:27.761: INFO: stdout: "Paused\n" STEP: exposing RC Aug 17 00:23:27.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9009' Aug 17 00:23:27.927: INFO: stderr: "" Aug 17 00:23:27.927: INFO: stdout: "service/rm2 exposed\n" Aug 17 00:23:28.009: INFO: Service rm2 in namespace kubectl-9009 found. STEP: exposing service Aug 17 00:23:30.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9009' Aug 17 00:23:30.156: INFO: stderr: "" Aug 17 00:23:30.157: INFO: stdout: "service/rm3 exposed\n" Aug 17 00:23:30.163: INFO: Service rm3 in namespace kubectl-9009 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:23:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9009" for this suite. • [SLOW TEST:9.539 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1241 should create services for rc [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":294,"completed":158,"skipped":2639,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:23:32.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4930 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 17 00:23:32.278: INFO: Found 0 stateful pods, waiting for 3 Aug 17 00:23:42.283: INFO: Found 2 stateful pods, waiting for 3 Aug 17 00:23:52.284: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:23:52.284: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:23:52.284: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 17 00:23:52.315: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 17 00:24:02.365: INFO: Updating stateful set ss2 Aug 17 00:24:02.401: INFO: Waiting for Pod statefulset-4930/ss2-2 
to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 17 00:24:12.847: INFO: Found 2 stateful pods, waiting for 3 Aug 17 00:24:22.873: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:24:22.873: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:24:22.873: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 17 00:24:22.898: INFO: Updating stateful set ss2 Aug 17 00:24:23.002: INFO: Waiting for Pod statefulset-4930/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 00:24:33.028: INFO: Updating stateful set ss2 Aug 17 00:24:33.058: INFO: Waiting for StatefulSet statefulset-4930/ss2 to complete update Aug 17 00:24:33.058: INFO: Waiting for Pod statefulset-4930/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 00:24:43.087: INFO: Deleting all statefulset in ns statefulset-4930 Aug 17 00:24:43.089: INFO: Scaling statefulset ss2 to 0 Aug 17 00:25:03.110: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 00:25:03.113: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:25:03.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4930" for this suite. 
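Annotation: the canary and phased updates above hinge on spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition roll to the new revision, lower ordinals keep the old one. A sketch against a statefulset like ss2 (the JSON-patch path assumes the image lives in container 0):

# New template revision, but partition=3 (> replicas) so nothing rolls yet.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl patch statefulset ss2 --type=json \
  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
# Canary: only the highest ordinal (ss2-2) picks up the new revision.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phased roll-out: keep lowering the partition, finishing at 0.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2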
• [SLOW TEST:90.981 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":294,"completed":159,"skipped":2656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:25:03.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 17 00:25:03.212: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:25:20.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-593" for this suite. 
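Annotation: the published OpenAPI document tracks a CRD's served versions, so renaming a version entry and re-applying is enough to change what /openapi/v2 advertises, which is what the test checks. A minimal two-version CRD in the apiextensions.k8s.io/v1 shape (the suite drives this through its own fixtures; group and kind here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    kind: Foo
    plural: foos
    singular: foo
  versions:
  - name: v1alpha1    # rename this entry and re-apply: the old name disappears
    served: true      # from the published spec and the new one is served
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v1beta1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get --raw /openapi/v2 | grep -o 'mygroup.example.com[^"]*' | sort -u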
• [SLOW TEST:17.311 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":294,"completed":160,"skipped":2689,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:25:20.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9047 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9047 STEP: creating replication controller externalsvc in namespace services-9047 I0817 00:25:20.749626 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9047, replica count: 2 I0817 00:25:23.800031 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:25:26.800332 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 17 00:25:26.850: INFO: Creating new exec pod Aug 17 00:25:30.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-9047 execpodt2jxk -- /bin/sh -x -c nslookup clusterip-service.services-9047.svc.cluster.local' Aug 17 00:25:31.093: INFO: stderr: "I0817 00:25:31.017206 2098 log.go:181] (0xc0006354a0) (0xc000b13680) Create stream\nI0817 00:25:31.017280 2098 log.go:181] (0xc0006354a0) (0xc000b13680) Stream added, broadcasting: 1\nI0817 00:25:31.021866 2098 log.go:181] (0xc0006354a0) Reply frame received for 1\nI0817 00:25:31.021897 2098 log.go:181] (0xc0006354a0) (0xc000614280) Create stream\nI0817 00:25:31.021906 2098 log.go:181] (0xc0006354a0) (0xc000614280) Stream added, broadcasting: 3\nI0817 
00:25:31.022722 2098 log.go:181] (0xc0006354a0) Reply frame received for 3\nI0817 00:25:31.022760 2098 log.go:181] (0xc0006354a0) (0xc000614b40) Create stream\nI0817 00:25:31.022772 2098 log.go:181] (0xc0006354a0) (0xc000614b40) Stream added, broadcasting: 5\nI0817 00:25:31.023504 2098 log.go:181] (0xc0006354a0) Reply frame received for 5\nI0817 00:25:31.072961 2098 log.go:181] (0xc0006354a0) Data frame received for 5\nI0817 00:25:31.072995 2098 log.go:181] (0xc000614b40) (5) Data frame handling\nI0817 00:25:31.073013 2098 log.go:181] (0xc000614b40) (5) Data frame sent\n+ nslookup clusterip-service.services-9047.svc.cluster.local\nI0817 00:25:31.081135 2098 log.go:181] (0xc0006354a0) Data frame received for 3\nI0817 00:25:31.081163 2098 log.go:181] (0xc000614280) (3) Data frame handling\nI0817 00:25:31.081181 2098 log.go:181] (0xc000614280) (3) Data frame sent\nI0817 00:25:31.082170 2098 log.go:181] (0xc0006354a0) Data frame received for 3\nI0817 00:25:31.082195 2098 log.go:181] (0xc000614280) (3) Data frame handling\nI0817 00:25:31.082210 2098 log.go:181] (0xc000614280) (3) Data frame sent\nI0817 00:25:31.082433 2098 log.go:181] (0xc0006354a0) Data frame received for 3\nI0817 00:25:31.082455 2098 log.go:181] (0xc000614280) (3) Data frame handling\nI0817 00:25:31.082561 2098 log.go:181] (0xc0006354a0) Data frame received for 5\nI0817 00:25:31.082577 2098 log.go:181] (0xc000614b40) (5) Data frame handling\nI0817 00:25:31.086845 2098 log.go:181] (0xc0006354a0) Data frame received for 1\nI0817 00:25:31.086865 2098 log.go:181] (0xc000b13680) (1) Data frame handling\nI0817 00:25:31.086874 2098 log.go:181] (0xc000b13680) (1) Data frame sent\nI0817 00:25:31.086886 2098 log.go:181] (0xc0006354a0) (0xc000b13680) Stream removed, broadcasting: 1\nI0817 00:25:31.086902 2098 log.go:181] (0xc0006354a0) Go away received\nI0817 00:25:31.087198 2098 log.go:181] (0xc0006354a0) (0xc000b13680) Stream removed, broadcasting: 1\nI0817 00:25:31.087216 2098 log.go:181] (0xc0006354a0) (0xc000614280) Stream removed, broadcasting: 3\nI0817 00:25:31.087229 2098 log.go:181] (0xc0006354a0) (0xc000614b40) Stream removed, broadcasting: 5\n" Aug 17 00:25:31.094: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9047.svc.cluster.local\tcanonical name = externalsvc.services-9047.svc.cluster.local.\nName:\texternalsvc.services-9047.svc.cluster.local\nAddress: 10.108.61.13\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9047, will wait for the garbage collector to delete the pods Aug 17 00:25:31.153: INFO: Deleting ReplicationController externalsvc took: 6.827334ms Aug 17 00:25:31.554: INFO: Terminating ReplicationController externalsvc pods took: 400.232579ms Aug 17 00:25:36.271: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:25:36.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9047" for this suite. 
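Annotation: the nslookup output above is the assertion. Once the service's type is ExternalName, cluster DNS answers with a CNAME to the target FQDN instead of a ClusterIP A record. The end state the test moves the service into looks like this (when flipping the type in place, the suite also clears the old clusterIP and ports):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-9047
spec:
  type: ExternalName
  externalName: externalsvc.services-9047.svc.cluster.local
EOF
# From any pod in the cluster, as the test does:
#   nslookup clusterip-service.services-9047.svc.cluster.local
# now returns a CNAME for externalsvc.services-9047.svc.cluster.local.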
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:15.871 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":294,"completed":161,"skipped":2702,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:25:36.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:25:36.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-343" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":294,"completed":162,"skipped":2706,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:25:36.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:25:36.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 17 00:25:37.248: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:37Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T00:25:37Z]] name:name1 resourceVersion:546301 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64dc7316-a2f7-4c5a-8153-89facada8d7c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 17 00:25:47.257: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:47Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T00:25:47Z]] name:name2 resourceVersion:546358 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0f154739-c0e9-4790-b8ef-d016341eb5f1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 17 00:25:57.265: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T00:25:57Z]] name:name1 resourceVersion:546388 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64dc7316-a2f7-4c5a-8153-89facada8d7c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 17 00:26:07.272: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test 
operation:Update time:2020-08-17T00:26:07Z]] name:name2 resourceVersion:546418 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0f154739-c0e9-4790-b8ef-d016341eb5f1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 17 00:26:17.281: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T00:25:57Z]] name:name1 resourceVersion:546448 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64dc7316-a2f7-4c5a-8153-89facada8d7c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 17 00:26:27.291: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T00:25:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T00:26:07Z]] name:name2 resourceVersion:546478 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0f154739-c0e9-4790-b8ef-d016341eb5f1] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:26:37.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9563" for this suite. 
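Annotation: the watch test drives two custom resources through ADDED, MODIFIED, MODIFIED, DELETED and asserts the events arrive in that order, which is what the "Got :" lines above show. The same stream is visible with kubectl, using the plural and group from the selfLinks above (the noxus resource exists only while the suite's CRD is installed):

# Fully-qualified resource.version.group form avoids ambiguity with other CRDs.
kubectl get noxus.v1beta1.mygroup.example.com --watch -o name
# Or stream the raw watch from the API server directly:
kubectl get --raw '/apis/mygroup.example.com/v1beta1/noxus?watch=true'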
• [SLOW TEST:61.285 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":294,"completed":163,"skipped":2711,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:26:37.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:26:37.915: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 17 00:26:37.924: INFO: Number of nodes with available pods: 0 Aug 17 00:26:37.925: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 17 00:26:38.026: INFO: Number of nodes with available pods: 0 Aug 17 00:26:38.026: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:39.030: INFO: Number of nodes with available pods: 0 Aug 17 00:26:39.030: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:40.031: INFO: Number of nodes with available pods: 0 Aug 17 00:26:40.031: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:41.031: INFO: Number of nodes with available pods: 0 Aug 17 00:26:41.031: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:42.031: INFO: Number of nodes with available pods: 1 Aug 17 00:26:42.031: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 17 00:26:42.099: INFO: Number of nodes with available pods: 1 Aug 17 00:26:42.099: INFO: Number of running nodes: 0, number of available pods: 1 Aug 17 00:26:43.102: INFO: Number of nodes with available pods: 0 Aug 17 00:26:43.102: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 17 00:26:43.163: INFO: Number of nodes with available pods: 0 Aug 17 00:26:43.163: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:44.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:44.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:45.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:45.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:46.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:46.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:47.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:47.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:48.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:48.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:49.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:49.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:50.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:50.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:51.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:51.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:52.168: INFO: Number of nodes with available pods: 0 Aug 17 00:26:52.168: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:53.167: INFO: Number of nodes with available pods: 0 Aug 17 00:26:53.167: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:26:54.167: INFO: Number of nodes with available pods: 1 Aug 17 00:26:54.167: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3013, will wait for the garbage collector to delete the pods Aug 17 00:26:54.233: INFO: Deleting DaemonSet.extensions daemon-set took: 6.534013ms Aug 17 00:26:54.633: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.200777ms Aug 17 00:27:00.136: 
INFO: Number of nodes with available pods: 0 Aug 17 00:27:00.136: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 00:27:00.194: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3013/daemonsets","resourceVersion":"546630"},"items":null} Aug 17 00:27:00.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3013/pods","resourceVersion":"546630"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3013" for this suite. • [SLOW TEST:22.477 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":294,"completed":164,"skipped":2714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:00.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-1e5b561a-0a1c-473b-abd0-f50d536c10b7 STEP: Creating a pod to test consume configMaps Aug 17 00:27:00.383: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24" in namespace "projected-7366" to be "Succeeded or Failed" Aug 17 00:27:00.401: INFO: Pod "pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24": Phase="Pending", Reason="", readiness=false. Elapsed: 18.479972ms Aug 17 00:27:02.405: INFO: Pod "pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022744668s Aug 17 00:27:04.428: INFO: Pod "pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045480479s STEP: Saw pod success Aug 17 00:27:04.428: INFO: Pod "pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24" satisfied condition "Succeeded or Failed" Aug 17 00:27:04.431: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24 container projected-configmap-volume-test: STEP: delete the pod Aug 17 00:27:04.466: INFO: Waiting for pod pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24 to disappear Aug 17 00:27:04.489: INFO: Pod pod-projected-configmaps-0055761d-0bb6-4f5f-886b-3f3bf8dccf24 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:04.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7366" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":165,"skipped":2746,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:04.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1528 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 00:27:04.902: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 00:27:05.000: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:27:07.213: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:27:09.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:11.005: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:13.005: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:15.005: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:17.005: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:19.005: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:27:21.005: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 00:27:21.012: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 00:27:23.017: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 00:27:27.053: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.54:8080/dial?request=hostname&protocol=udp&host=10.244.2.53&port=8081&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:27:27.053: INFO: >>> kubeConfig: /root/.kube/config I0817 00:27:27.078374 7 log.go:181] (0xc002fa84d0) (0xc001a83b80) Create stream I0817 00:27:27.078405 7 log.go:181] (0xc002fa84d0) (0xc001a83b80) Stream added, broadcasting: 1 I0817 00:27:27.081216 7 log.go:181] (0xc002fa84d0) Reply frame received for 1 I0817 00:27:27.081252 7 log.go:181] (0xc002fa84d0) (0xc00092c1e0) Create stream I0817 00:27:27.081286 7 log.go:181] (0xc002fa84d0) (0xc00092c1e0) Stream added, broadcasting: 3 I0817 00:27:27.082366 7 log.go:181] (0xc002fa84d0) Reply frame received for 3 I0817 00:27:27.082400 7 log.go:181] (0xc002fa84d0) (0xc0035f6b40) Create stream I0817 00:27:27.082412 7 log.go:181] (0xc002fa84d0) (0xc0035f6b40) Stream added, broadcasting: 5 I0817 00:27:27.083261 7 log.go:181] (0xc002fa84d0) Reply frame received for 5 I0817 00:27:27.146880 7 log.go:181] (0xc002fa84d0) Data frame received for 3 I0817 00:27:27.146904 7 log.go:181] (0xc00092c1e0) (3) Data frame handling I0817 00:27:27.146932 7 log.go:181] (0xc00092c1e0) (3) Data frame sent I0817 00:27:27.147465 7 log.go:181] (0xc002fa84d0) Data frame received for 5 I0817 00:27:27.147493 7 log.go:181] (0xc0035f6b40) (5) Data frame handling I0817 00:27:27.147529 7 log.go:181] (0xc002fa84d0) Data frame received for 3 I0817 00:27:27.147553 7 log.go:181] (0xc00092c1e0) (3) Data frame handling I0817 00:27:27.149307 7 log.go:181] (0xc002fa84d0) Data frame received for 1 I0817 00:27:27.149344 7 log.go:181] (0xc001a83b80) (1) Data frame handling I0817 00:27:27.149375 7 log.go:181] (0xc001a83b80) (1) Data frame sent I0817 00:27:27.149399 7 log.go:181] (0xc002fa84d0) (0xc001a83b80) Stream removed, broadcasting: 1 I0817 00:27:27.149460 7 log.go:181] (0xc002fa84d0) Go away received I0817 00:27:27.149507 7 log.go:181] (0xc002fa84d0) (0xc001a83b80) Stream removed, broadcasting: 1 I0817 00:27:27.149531 7 log.go:181] (0xc002fa84d0) (0xc00092c1e0) Stream removed, broadcasting: 3 I0817 00:27:27.149569 7 log.go:181] (0xc002fa84d0) (0xc0035f6b40) Stream removed, broadcasting: 5 Aug 17 00:27:27.149: INFO: Waiting for responses: map[] Aug 17 00:27:27.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.54:8080/dial?request=hostname&protocol=udp&host=10.244.1.42&port=8081&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:27:27.153: INFO: >>> kubeConfig: /root/.kube/config I0817 00:27:27.189852 7 log.go:181] (0xc003ba06e0) (0xc0037f8f00) Create stream I0817 00:27:27.189895 7 log.go:181] (0xc003ba06e0) (0xc0037f8f00) Stream added, broadcasting: 1 I0817 00:27:27.193068 7 log.go:181] (0xc003ba06e0) Reply frame received for 1 I0817 00:27:27.193123 7 log.go:181] (0xc003ba06e0) (0xc0012a0dc0) Create stream I0817 00:27:27.193141 7 log.go:181] (0xc003ba06e0) (0xc0012a0dc0) Stream added, broadcasting: 3 I0817 00:27:27.194290 7 log.go:181] (0xc003ba06e0) Reply frame received for 3 I0817 00:27:27.194343 7 log.go:181] (0xc003ba06e0) (0xc0037f8fa0) Create stream I0817 00:27:27.194361 7 log.go:181] (0xc003ba06e0) (0xc0037f8fa0) Stream added, broadcasting: 5 I0817 00:27:27.195393 7 log.go:181] (0xc003ba06e0) Reply frame received for 5 I0817 00:27:27.271629 7 log.go:181] (0xc003ba06e0) Data 
frame received for 3 I0817 00:27:27.271668 7 log.go:181] (0xc0012a0dc0) (3) Data frame handling I0817 00:27:27.271712 7 log.go:181] (0xc0012a0dc0) (3) Data frame sent I0817 00:27:27.272349 7 log.go:181] (0xc003ba06e0) Data frame received for 3 I0817 00:27:27.272384 7 log.go:181] (0xc0012a0dc0) (3) Data frame handling I0817 00:27:27.272414 7 log.go:181] (0xc003ba06e0) Data frame received for 5 I0817 00:27:27.272426 7 log.go:181] (0xc0037f8fa0) (5) Data frame handling I0817 00:27:27.274089 7 log.go:181] (0xc003ba06e0) Data frame received for 1 I0817 00:27:27.274112 7 log.go:181] (0xc0037f8f00) (1) Data frame handling I0817 00:27:27.274130 7 log.go:181] (0xc0037f8f00) (1) Data frame sent I0817 00:27:27.274154 7 log.go:181] (0xc003ba06e0) (0xc0037f8f00) Stream removed, broadcasting: 1 I0817 00:27:27.274183 7 log.go:181] (0xc003ba06e0) Go away received I0817 00:27:27.274302 7 log.go:181] (0xc003ba06e0) (0xc0037f8f00) Stream removed, broadcasting: 1 I0817 00:27:27.274322 7 log.go:181] (0xc003ba06e0) (0xc0012a0dc0) Stream removed, broadcasting: 3 I0817 00:27:27.274331 7 log.go:181] (0xc003ba06e0) (0xc0037f8fa0) Stream removed, broadcasting: 5 Aug 17 00:27:27.274: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:27.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1528" for this suite. • [SLOW TEST:22.785 seconds] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":294,"completed":166,"skipped":2747,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:27.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 00:27:27.401: INFO: Waiting up to 5m0s for pod "downward-api-bec7cfd7-3271-494a-ad78-fcae17318781" in namespace 
"downward-api-21" to be "Succeeded or Failed" Aug 17 00:27:27.426: INFO: Pod "downward-api-bec7cfd7-3271-494a-ad78-fcae17318781": Phase="Pending", Reason="", readiness=false. Elapsed: 25.090317ms Aug 17 00:27:29.459: INFO: Pod "downward-api-bec7cfd7-3271-494a-ad78-fcae17318781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05780234s Aug 17 00:27:31.463: INFO: Pod "downward-api-bec7cfd7-3271-494a-ad78-fcae17318781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062273978s STEP: Saw pod success Aug 17 00:27:31.463: INFO: Pod "downward-api-bec7cfd7-3271-494a-ad78-fcae17318781" satisfied condition "Succeeded or Failed" Aug 17 00:27:31.466: INFO: Trying to get logs from node latest-worker2 pod downward-api-bec7cfd7-3271-494a-ad78-fcae17318781 container dapi-container: STEP: delete the pod Aug 17 00:27:31.520: INFO: Waiting for pod downward-api-bec7cfd7-3271-494a-ad78-fcae17318781 to disappear Aug 17 00:27:31.566: INFO: Pod downward-api-bec7cfd7-3271-494a-ad78-fcae17318781 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:31.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-21" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":294,"completed":167,"skipped":2756,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:31.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-1c63cc90-098e-42a1-9c8e-36cdb9aa8e55 STEP: Creating a pod to test consume secrets Aug 17 00:27:31.718: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6" in namespace "projected-8847" to be "Succeeded or Failed" Aug 17 00:27:31.751: INFO: Pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.795376ms Aug 17 00:27:34.011: INFO: Pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292674767s Aug 17 00:27:36.014: INFO: Pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296086717s Aug 17 00:27:38.018: INFO: Pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.300615419s STEP: Saw pod success Aug 17 00:27:38.019: INFO: Pod "pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6" satisfied condition "Succeeded or Failed" Aug 17 00:27:38.022: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6 container projected-secret-volume-test: STEP: delete the pod Aug 17 00:27:38.076: INFO: Waiting for pod pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6 to disappear Aug 17 00:27:38.083: INFO: Pod pod-projected-secrets-1cb9c0f8-1693-4613-9132-f01d0bcedee6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:38.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8847" for this suite. • [SLOW TEST:6.519 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":168,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:38.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 17 00:27:38.144: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
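Registering the sample API server boils down to creating an APIService object that tells the aggregation layer which in-cluster service to proxy the new group/version to. A rough sketch with the kube-aggregator client, where the group, service reference, and TLS handling are placeholders rather than the test's actual values:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	aggClient, _ := aggregatorclient.NewForConfig(config)
	port := int32(443)

	// Route the (hypothetical) wardle.example.com/v1alpha1 group through a
	// service fronting the sample API server's deployment.
	apiService := &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-674",
				Name:      "sample-api", // placeholder service name
				Port:      &port,
			},
			// A real setup would carry the serving CA in CABundle instead.
			InsecureSkipTLSVerify: true,
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	if _, err := aggClient.ApiregistrationV1().APIServices().
		Create(context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Once the APIService reports Available, requests under /apis/wardle.example.com/v1alpha1 are proxied to the sample server, which is what the "Waited ... for the sample-apiserver to be ready" line confirms.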
Aug 17 00:27:38.627: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 17 00:27:40.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5985bbd468\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:27:42.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220858, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5985bbd468\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:27:45.739: INFO: Waited 824.051608ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:46.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-674" for this suite. 
• [SLOW TEST:8.297 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":294,"completed":169,"skipped":2777,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:46.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 17 00:27:47.033: INFO: Waiting up to 5m0s for pod "pod-4169f061-e894-4d7a-a847-25f11dac36ba" in namespace "emptydir-1295" to be "Succeeded or Failed" Aug 17 00:27:47.064: INFO: Pod "pod-4169f061-e894-4d7a-a847-25f11dac36ba": Phase="Pending", Reason="", readiness=false. Elapsed: 30.197186ms Aug 17 00:27:49.067: INFO: Pod "pod-4169f061-e894-4d7a-a847-25f11dac36ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034094292s Aug 17 00:27:51.071: INFO: Pod "pod-4169f061-e894-4d7a-a847-25f11dac36ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037861784s STEP: Saw pod success Aug 17 00:27:51.071: INFO: Pod "pod-4169f061-e894-4d7a-a847-25f11dac36ba" satisfied condition "Succeeded or Failed" Aug 17 00:27:51.074: INFO: Trying to get logs from node latest-worker2 pod pod-4169f061-e894-4d7a-a847-25f11dac36ba container test-container: STEP: delete the pod Aug 17 00:27:51.102: INFO: Waiting for pod pod-4169f061-e894-4d7a-a847-25f11dac36ba to disappear Aug 17 00:27:51.106: INFO: Pod pod-4169f061-e894-4d7a-a847-25f11dac36ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:51.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1295" for this suite. 
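The (root,0777,tmpfs) case mounts an emptyDir whose medium is Memory, i.e. a tmpfs mount, and asserts on its mode. A minimal sketch of such a pod, assuming a busybox image and a stat-based check rather than the suite's own test container:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder image
				// Print the mount's permissions so the pod log can be
				// checked for the expected 0777 mode.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```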
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":170,"skipped":2781,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:51.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-498a8c7f-168a-4863-883a-726f86dd1883 STEP: Creating a pod to test consume configMaps Aug 17 00:27:51.221: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3" in namespace "projected-5925" to be "Succeeded or Failed" Aug 17 00:27:51.239: INFO: Pod "pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.961098ms Aug 17 00:27:53.246: INFO: Pod "pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024856682s Aug 17 00:27:55.250: INFO: Pod "pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029140376s STEP: Saw pod success Aug 17 00:27:55.250: INFO: Pod "pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3" satisfied condition "Succeeded or Failed" Aug 17 00:27:55.253: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3 container projected-configmap-volume-test: STEP: delete the pod Aug 17 00:27:55.310: INFO: Waiting for pod pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3 to disappear Aug 17 00:27:55.335: INFO: Pod pod-projected-configmaps-25a9a7d3-a01b-4ec2-956c-6fdc9f0e66c3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:55.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5925" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":171,"skipped":2801,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:27:55.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:27:59.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7875" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":294,"completed":172,"skipped":2805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:00.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:28:00.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5948" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":294,"completed":173,"skipped":2835,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:00.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3707 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3707 STEP: Creating statefulset with conflicting port in namespace statefulset-3707 STEP: Waiting until pod test-pod will start running in namespace statefulset-3707 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3707 Aug 17 00:28:06.292: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: fe86a9b7-7f15-4283-873e-b2895f2d1400, status phase: Pending. Waiting for statefulset controller to delete. Aug 17 00:28:06.629: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: fe86a9b7-7f15-4283-873e-b2895f2d1400, status phase: Failed. Waiting for statefulset controller to delete. Aug 17 00:28:06.650: INFO: Observed stateful pod in namespace: statefulset-3707, name: ss-0, uid: fe86a9b7-7f15-4283-873e-b2895f2d1400, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 17 00:28:06.666: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3707 STEP: Removing pod with conflicting port in namespace statefulset-3707 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3707 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 00:28:10.798: INFO: Deleting all statefulset in ns statefulset-3707 Aug 17 00:28:10.802: INFO: Scaling statefulset ss to 0 Aug 17 00:28:20.832: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 00:28:20.835: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:28:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3707" for this suite. • [SLOW TEST:20.724 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":294,"completed":174,"skipped":2848,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:20.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:28:20.948: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 17 00:28:23.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 create -f -' Aug 17 00:28:28.455: INFO: stderr: "" Aug 17 00:28:28.455: INFO: stdout: "e2e-test-crd-publish-openapi-1774-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 17 00:28:28.455: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 delete e2e-test-crd-publish-openapi-1774-crds test-foo' Aug 17 00:28:28.570: INFO: stderr: "" Aug 17 00:28:28.570: INFO: stdout: "e2e-test-crd-publish-openapi-1774-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 17 00:28:28.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 apply -f -' Aug 17 00:28:28.887: INFO: stderr: "" Aug 17 00:28:28.887: INFO: stdout: "e2e-test-crd-publish-openapi-1774-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 17 00:28:28.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 delete e2e-test-crd-publish-openapi-1774-crds test-foo' Aug 17 00:28:29.000: INFO: stderr: "" Aug 17 00:28:29.000: INFO: stdout: "e2e-test-crd-publish-openapi-1774-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 17 00:28:29.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 create -f -' Aug 17 00:28:29.261: INFO: rc: 1 Aug 17 00:28:29.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 apply -f -' Aug 17 00:28:29.550: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 17 00:28:29.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 create -f -' Aug 17 00:28:30.231: INFO: rc: 1 Aug 17 00:28:30.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4805 apply -f -' Aug 17 00:28:30.606: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 17 00:28:30.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1774-crds' Aug 17 00:28:30.887: INFO: stderr: "" Aug 17 00:28:30.887: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1774-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 17 00:28:30.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1774-crds.metadata' Aug 17 00:28:31.244: INFO: stderr: "" Aug 17 00:28:31.244: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1774-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. 
Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. 
Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 17 00:28:31.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1774-crds.spec' Aug 17 00:28:31.565: INFO: stderr: "" Aug 17 00:28:31.565: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1774-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 17 00:28:31.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1774-crds.spec.bars' Aug 17 00:28:31.918: INFO: stderr: "" Aug 17 00:28:31.918: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1774-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 17 00:28:31.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1774-crds.spec.bars2' Aug 17 00:28:32.273: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:28:34.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4805" for this suite. 
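Both the client-side validation and the `kubectl explain` output above are driven by the structural OpenAPI v3 schema published with the CRD. A condensed sketch of a CRD carrying a schema shaped like the log's Foo type (spec.bars as a list of objects with a required name), with the group and names simplified from the test's generated ones:

```go
package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	client, _ := apiextclient.NewForConfig(config)

	// spec.bars: list of objects with required "name", optional "age"/"bazs",
	// matching the fields kubectl explain printed above.
	barProps := map[string]apiextv1.JSONSchemaProps{
		"name": {Type: "string"},
		"age":  {Type: "string"},
		"bazs": {Type: "array", Items: &apiextv1.JSONSchemaPropsOrArray{
			Schema: &apiextv1.JSONSchemaProps{Type: "string"},
		}},
	}
	schema := &apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {Type: "object", Properties: map[string]apiextv1.JSONSchemaProps{
				"bars": {Type: "array", Items: &apiextv1.JSONSchemaPropsOrArray{
					Schema: &apiextv1.JSONSchemaProps{
						Type:       "object",
						Required:   []string{"name"},
						Properties: barProps,
					},
				}},
			}},
		},
	}

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test-foo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-foo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{OpenAPIV3Schema: schema},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

With the schema in place, the apiserver publishes it into /openapi/v2, which is where kubectl picks it up for create/apply validation and for explain, hence the rc: 1 failures above when unknown or missing required properties are submitted.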
• [SLOW TEST:13.421 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":294,"completed":175,"skipped":2850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:34.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Aug 17 00:28:34.350: INFO: Waiting up to 5m0s for pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b" in namespace "emptydir-7582" to be "Succeeded or Failed" Aug 17 00:28:34.363: INFO: Pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.648506ms Aug 17 00:28:36.520: INFO: Pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169268182s Aug 17 00:28:38.524: INFO: Pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b": Phase="Running", Reason="", readiness=true. Elapsed: 4.173468042s Aug 17 00:28:40.628: INFO: Pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277422576s STEP: Saw pod success Aug 17 00:28:40.628: INFO: Pod "pod-7d45624b-7fa4-4c82-ad09-11af4500b31b" satisfied condition "Succeeded or Failed" Aug 17 00:28:40.631: INFO: Trying to get logs from node latest-worker2 pod pod-7d45624b-7fa4-4c82-ad09-11af4500b31b container test-container: STEP: delete the pod Aug 17 00:28:40.680: INFO: Waiting for pod pod-7d45624b-7fa4-4c82-ad09-11af4500b31b to disappear Aug 17 00:28:40.702: INFO: Pod pod-7d45624b-7fa4-4c82-ad09-11af4500b31b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:28:40.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7582" for this suite. 
• [SLOW TEST:6.417 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":176,"skipped":2876,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:40.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
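The repeated "can't tolerate node latest-control-plane" lines below are the test's node loop skipping the control-plane node: the DaemonSet's pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so no daemon pod is expected there and the node is excluded from the availability count. A sketch of the toleration that would admit daemon pods onto such a node, with the key and effect copied from the taint in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Added to a DaemonSet's pod template spec, this toleration lets its pods
	// schedule onto nodes carrying the control-plane taint seen in the log.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists, // tolerate regardless of the taint's value
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}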
Aug 17 00:28:40.948: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:40.959: INFO: Number of nodes with available pods: 0 Aug 17 00:28:40.959: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:28:42.012: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:42.016: INFO: Number of nodes with available pods: 0 Aug 17 00:28:42.016: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:28:43.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:43.201: INFO: Number of nodes with available pods: 0 Aug 17 00:28:43.201: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:28:44.048: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:44.052: INFO: Number of nodes with available pods: 0 Aug 17 00:28:44.052: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:28:44.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:45.001: INFO: Number of nodes with available pods: 0 Aug 17 00:28:45.001: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:28:45.965: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:45.968: INFO: Number of nodes with available pods: 2 Aug 17 00:28:45.968: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 17 00:28:46.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:46.044: INFO: Number of nodes with available pods: 1 Aug 17 00:28:46.044: INFO: Node latest-worker2 is running more than one daemon pod Aug 17 00:28:47.049: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:47.052: INFO: Number of nodes with available pods: 1 Aug 17 00:28:47.052: INFO: Node latest-worker2 is running more than one daemon pod Aug 17 00:28:48.050: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:48.054: INFO: Number of nodes with available pods: 1 Aug 17 00:28:48.054: INFO: Node latest-worker2 is running more than one daemon pod Aug 17 00:28:49.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:49.207: INFO: Number of nodes with available pods: 1 Aug 17 00:28:49.207: INFO: Node latest-worker2 is running more than one daemon pod Aug 17 00:28:50.049: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:28:50.052: INFO: Number of nodes with available pods: 2 Aug 17 00:28:50.052: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7429, will wait for the garbage collector to delete the pods Aug 17 00:28:50.115: INFO: Deleting DaemonSet.extensions daemon-set took: 6.254656ms Aug 17 00:28:50.516: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.215243ms Aug 17 00:28:54.619: INFO: Number of nodes with available pods: 0 Aug 17 00:28:54.619: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 00:28:54.621: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7429/daemonsets","resourceVersion":"547646"},"items":null} Aug 17 00:28:54.623: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7429/pods","resourceVersion":"547646"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:28:54.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7429" for this suite.
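The "revived" check above reduces to the DaemonSet's status counters converging again after one daemon pod is forced into the Failed phase: the controller notices the failed pod, deletes it, and creates a replacement. A sketch of reading those counters with client-go; the namespace and name match the run above, but by this point in the log those objects are already deleted, so treat them as placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ds, err := cs.AppsV1().DaemonSets("daemonsets-7429").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The "number of running nodes / available pods" counts in the log are
	// derived from these status fields.
	fmt.Printf("desired=%d current=%d available=%d\n",
		ds.Status.DesiredNumberScheduled, ds.Status.CurrentNumberScheduled, ds.Status.NumberAvailable)
}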
• [SLOW TEST:13.929 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":294,"completed":177,"skipped":2895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:28:54.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 17 00:28:59.221: INFO: Successfully updated pod "pod-update-activedeadlineseconds-78fb5986-b4a8-4304-b249-58353e4f9717" Aug 17 00:28:59.221: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-78fb5986-b4a8-4304-b249-58353e4f9717" in namespace "pods-4010" to be "terminated due to deadline exceeded" Aug 17 00:28:59.263: INFO: Pod "pod-update-activedeadlineseconds-78fb5986-b4a8-4304-b249-58353e4f9717": Phase="Running", Reason="", readiness=true. Elapsed: 41.95141ms Aug 17 00:29:01.267: INFO: Pod "pod-update-activedeadlineseconds-78fb5986-b4a8-4304-b249-58353e4f9717": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.046136997s Aug 17 00:29:01.267: INFO: Pod "pod-update-activedeadlineseconds-78fb5986-b4a8-4304-b249-58353e4f9717" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:29:01.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4010" for this suite. 
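spec.activeDeadlineSeconds is one of the few pod spec fields that may be changed on a live pod, and shortening it is what the test does: after the update the kubelet terminates the pod, which lands in Phase="Failed", Reason="DeadlineExceeded", exactly the transition the two status lines above record. A sketch of the same update as a strategic-merge patch; the namespace is from the log, the pod name is shortened to a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Give the running pod a 5s deadline; the kubelet then kills it and the
	// pod ends up with Phase=Failed, Reason=DeadlineExceeded.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	pod, err := cs.CoreV1().Pods("pods-4010").Patch(context.TODO(),
		"pod-update-activedeadlineseconds", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("phase after patch:", pod.Status.Phase)
}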
• [SLOW TEST:6.635 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":294,"completed":178,"skipped":3006,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:29:01.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:29:02.202: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:29:04.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220942, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220942, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220942, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733220942, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:29:07.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:29:09.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-763" for this suite. STEP: Destroying namespace "webhook-763-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.045 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":294,"completed":179,"skipped":3007,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:29:09.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:29:09.447: INFO: Create a RollingUpdate DaemonSet Aug 17 00:29:09.452: INFO: Check that daemon pods launch on every node of the cluster Aug 17 00:29:09.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:09.506: INFO: Number of nodes with available pods: 0 Aug 17 00:29:09.506: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:29:10.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:10.515: INFO: Number of nodes with available pods: 0 Aug 17 00:29:10.515: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:29:11.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:11.515: INFO: Number of nodes with available pods: 0 Aug 17 00:29:11.515: INFO: Node latest-worker is running more
than one daemon pod Aug 17 00:29:12.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:12.515: INFO: Number of nodes with available pods: 0 Aug 17 00:29:12.515: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:29:13.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:13.521: INFO: Number of nodes with available pods: 1 Aug 17 00:29:13.521: INFO: Node latest-worker is running more than one daemon pod Aug 17 00:29:14.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:14.515: INFO: Number of nodes with available pods: 2 Aug 17 00:29:14.515: INFO: Number of running nodes: 2, number of available pods: 2 Aug 17 00:29:14.515: INFO: Update the DaemonSet to trigger a rollout Aug 17 00:29:14.522: INFO: Updating DaemonSet daemon-set Aug 17 00:29:19.545: INFO: Roll back the DaemonSet before rollout is complete Aug 17 00:29:19.550: INFO: Updating DaemonSet daemon-set Aug 17 00:29:19.550: INFO: Make sure DaemonSet rollback is complete Aug 17 00:29:19.561: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:19.561: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:19.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:20.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:20.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:20.613: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:21.611: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:21.611: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:21.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:22.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:22.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:22.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:23.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:23.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:23.614: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:24.611: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 17 00:29:24.611: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:24.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:25.611: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:25.611: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:25.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:26.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:26.611: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:26.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:27.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:27.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:27.616: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:28.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 00:29:28.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:28.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:29.610: INFO: Wrong image for pod: daemon-set-snl4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 17 00:29:29.610: INFO: Pod daemon-set-snl4x is not available Aug 17 00:29:29.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 17 00:29:30.610: INFO: Pod daemon-set-v8s2t is not available Aug 17 00:29:30.615: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9294, will wait for the garbage collector to delete the pods Aug 17 00:29:30.683: INFO: Deleting DaemonSet.extensions daemon-set took: 6.662971ms Aug 17 00:29:31.283: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.112649ms Aug 17 00:29:40.086: INFO: Number of nodes with available pods: 0 Aug 17 00:29:40.086: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 00:29:40.089: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9294/daemonsets","resourceVersion":"547986"},"items":null} Aug 17 00:29:40.091: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9294/pods","resourceVersion":"547986"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:29:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9294" for this suite.
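The sequence above is: roll the template to an image that can never pull (foo:non-existent), then restore the previous template before the rollout completes. Under the RollingUpdate strategy the controller only replaces the pod that is already broken (daemon-set-snl4x gives way to daemon-set-v8s2t); pods still running the good image are never restarted, which is the property the test name asserts. A sketch of the two template updates as patches; the container name "app" is an assumption, the two images are the ones in the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ds := cs.AppsV1().DaemonSets("daemonsets-9294")

	// Trigger a rollout to a broken image...
	bad := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"foo:non-existent"}]}}}}`)
	if _, err := ds.Patch(context.TODO(), "daemon-set", types.StrategicMergePatchType, bad, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// ...then roll back before it finishes; only the already-broken pod is replaced.
	good := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"docker.io/library/httpd:2.4.38-alpine"}]}}}}`)
	if _, err := ds.Patch(context.TODO(), "daemon-set", types.StrategicMergePatchType, good, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}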
• [SLOW TEST:30.785 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":294,"completed":180,"skipped":3012,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:29:40.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:31:40.307: INFO: Deleting pod "var-expansion-c461fe7f-f59a-4e89-b84a-34544c55962c" in namespace "var-expansion-5634" Aug 17 00:31:40.312: INFO: Wait up to 5m0s for pod "var-expansion-c461fe7f-f59a-4e89-b84a-34544c55962c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:31:44.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5634" for this suite. 
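The two-minute gap above is the test waiting for the expected failure: a volumeMount whose subPathExpr expands to an absolute path is rejected when the kubelet tries to start the container, so the pod never runs and is finally deleted. A sketch of such an invalid mount; the values are illustrative (the real test expands an environment variable into the subpath):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// subPathExpr must resolve to a relative path inside the volume; an
	// expansion with a leading "/" is absolute and the container never starts.
	mount := corev1.VolumeMount{
		Name:        "workdir",
		MountPath:   "/volume_mount",
		SubPathExpr: "/$(POD_NAME)", // leading "/" makes the expansion absolute: rejected
	}
	fmt.Printf("%+v\n", mount)
}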
• [SLOW TEST:124.471 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":294,"completed":181,"skipped":3022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:31:44.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 17 00:31:57.014: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3332 PodName:pod-sharedvolume-0e6f6317-f5df-4198-8eed-4f3c03d8a084 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:31:57.014: INFO: >>> kubeConfig: /root/.kube/config I0817 00:31:57.043126 7 log.go:181] (0xc002952370) (0xc0032492c0) Create stream I0817 00:31:57.043172 7 log.go:181] (0xc002952370) (0xc0032492c0) Stream added, broadcasting: 1 I0817 00:31:57.045020 7 log.go:181] (0xc002952370) Reply frame received for 1 I0817 00:31:57.045050 7 log.go:181] (0xc002952370) (0xc0002577c0) Create stream I0817 00:31:57.045061 7 log.go:181] (0xc002952370) (0xc0002577c0) Stream added, broadcasting: 3 I0817 00:31:57.045815 7 log.go:181] (0xc002952370) Reply frame received for 3 I0817 00:31:57.045840 7 log.go:181] (0xc002952370) (0xc00167de00) Create stream I0817 00:31:57.045850 7 log.go:181] (0xc002952370) (0xc00167de00) Stream added, broadcasting: 5 I0817 00:31:57.046596 7 log.go:181] (0xc002952370) Reply frame received for 5 I0817 00:31:57.118664 7 log.go:181] (0xc002952370) Data frame received for 5 I0817 00:31:57.118708 7 log.go:181] (0xc00167de00) (5) Data frame handling I0817 00:31:57.118733 7 log.go:181] (0xc002952370) Data frame received for 3 I0817 00:31:57.118751 7 log.go:181] (0xc0002577c0) (3) Data frame handling I0817 00:31:57.118762 7 log.go:181] (0xc0002577c0) (3) Data frame sent I0817 00:31:57.118771 7 log.go:181] (0xc002952370) Data frame received for 3 I0817 00:31:57.118778 7 log.go:181] (0xc0002577c0) (3) Data frame handling I0817 00:31:57.123240 7 log.go:181] (0xc002952370) Data frame received for 1 I0817 00:31:57.123277 7 log.go:181]
(0xc0032492c0) (1) Data frame handling I0817 00:31:57.123316 7 log.go:181] (0xc0032492c0) (1) Data frame sent I0817 00:31:57.123349 7 log.go:181] (0xc002952370) (0xc0032492c0) Stream removed, broadcasting: 1 I0817 00:31:57.123383 7 log.go:181] (0xc002952370) Go away received I0817 00:31:57.123489 7 log.go:181] (0xc002952370) (0xc0032492c0) Stream removed, broadcasting: 1 I0817 00:31:57.123507 7 log.go:181] (0xc002952370) (0xc0002577c0) Stream removed, broadcasting: 3 I0817 00:31:57.123517 7 log.go:181] (0xc002952370) (0xc00167de00) Stream removed, broadcasting: 5 Aug 17 00:31:57.123: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:31:57.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3332" for this suite. • [SLOW TEST:12.552 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":294,"completed":182,"skipped":3046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:31:57.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8425, will wait for the garbage collector to delete the pods Aug 17 00:32:05.330: INFO: Deleting Job.batch foo took: 11.244372ms Aug 17 00:32:05.730: INFO: Terminating Job.batch foo pods took: 400.258632ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:32:49.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8425" for this suite. 
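The Job deletion above leans on the garbage collector: the Job's pods carry an ownerReference to the Job, so deleting the Job with a propagation policy is enough for the pods to be collected, and "Ensuring job was deleted" then just polls until both are gone. A sketch of that delete call, with the namespace and Job name taken from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the Job stays (terminating) until the garbage
	// collector has deleted its pods, mirroring the wait in the log.
	policy := metav1.DeletePropagationForeground
	if err := cs.BatchV1().Jobs("job-8425").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}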
• [SLOW TEST:52.612 seconds] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":294,"completed":183,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:32:49.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:32:50.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:32:52.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221170, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221170, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221170, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221170, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:32:55.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 17 00:32:59.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config attach --namespace=webhook-596 to-be-attached-pod -i -c=container1' Aug 17 
00:32:59.537: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:32:59.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-596" for this suite. STEP: Destroying namespace "webhook-596-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":294,"completed":184,"skipped":3118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:32:59.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-7fc9b775-14f3-4743-9911-8f7baad7bb9e STEP: Creating a pod to test consume secrets Aug 17 00:32:59.934: INFO: Waiting up to 5m0s for pod "pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade" in namespace "secrets-1482" to be "Succeeded or Failed" Aug 17 00:32:59.978: INFO: Pod "pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade": Phase="Pending", Reason="", readiness=false. Elapsed: 44.277524ms Aug 17 00:33:01.983: INFO: Pod "pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048680548s Aug 17 00:33:03.986: INFO: Pod "pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052483543s STEP: Saw pod success Aug 17 00:33:03.986: INFO: Pod "pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade" satisfied condition "Succeeded or Failed" Aug 17 00:33:03.989: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade container secret-volume-test: STEP: delete the pod Aug 17 00:33:04.042: INFO: Waiting for pod pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade to disappear Aug 17 00:33:04.079: INFO: Pod pod-secrets-01faee56-a808-4e0f-bc40-dd1d23302ade no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:33:04.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1482" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":185,"skipped":3144,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:33:04.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:33:05.858: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:33:07.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221186, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221186, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221185, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:33:11.009: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should 
mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:33:11.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1067-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:33:12.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4214" for this suite. STEP: Destroying namespace "webhook-4214-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.224 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":294,"completed":186,"skipped":3146,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:33:12.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 17 00:33:12.461: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548906 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:12 
+0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:33:12.461: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548906 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 17 00:33:22.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548952 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:33:22.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548952 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 17 00:33:32.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548982 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:33:32.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 548982 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 17 00:33:42.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 549012 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:32 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:33:42.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-a 0ec1057a-e76f-4a12-9cac-2545def08b6b 549012 0 2020-08-17 00:33:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 17 00:33:52.560: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-b bb73a067-6a11-471a-89ed-ad4a35a9f879 549041 0 2020-08-17 00:33:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:33:52.560: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-b bb73a067-6a11-471a-89ed-ad4a35a9f879 549041 0 2020-08-17 00:33:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 17 00:34:02.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-b bb73a067-6a11-471a-89ed-ad4a35a9f879 549072 0 2020-08-17 00:33:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:34:02.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6627 /api/v1/namespaces/watch-6627/configmaps/e2e-watch-test-configmap-b bb73a067-6a11-471a-89ed-ad4a35a9f879 549072 0 2020-08-17 00:33:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 00:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:12.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6627" for this suite. 
• [SLOW TEST:60.397 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":294,"completed":187,"skipped":3151,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:12.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:34:13.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:34:15.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221253, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221253, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221253, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221253, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:34:18.545: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a 
mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:18.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3738" for this suite. STEP: Destroying namespace "webhook-3738-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.139 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":294,"completed":188,"skipped":3156,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:18.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-020eb9cb-1822-4d2b-b452-1ade08396a75 STEP: Creating a pod to test consume configMaps Aug 17 00:34:18.954: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c" in namespace "projected-8717" to be "Succeeded or Failed" Aug 17 00:34:18.982: INFO: Pod "pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.128694ms Aug 17 00:34:20.987: INFO: Pod "pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032661823s Aug 17 00:34:22.990: INFO: Pod "pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036276969s STEP: Saw pod success Aug 17 00:34:22.990: INFO: Pod "pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c" satisfied condition "Succeeded or Failed" Aug 17 00:34:22.993: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c container projected-configmap-volume-test: STEP: delete the pod Aug 17 00:34:23.071: INFO: Waiting for pod pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c to disappear Aug 17 00:34:23.080: INFO: Pod pod-projected-configmaps-45543501-da6a-478c-8562-140414b0796c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8717" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":189,"skipped":3156,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:23.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-6296 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6296 STEP: Deleting pre-stop pod Aug 17 00:34:36.448: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:36.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6296" for this suite. 
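The {"prestop": 1} entry in the server's report above is the tester pod's preStop hook firing before the kubelet kills the container. A sketch of a pod carrying such a hook, assuming the v1.19-era API used in this run (corev1.Handler; newer client-go renames it LifecycleHandler); the names, namespace, and the wget target are hypothetical stand-ins for the test's server pod:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical endpoint on a peer "server" pod that
							// records the call, as in the report above.
							Command: []string{"wget", "-O-", "http://server:8080/write?key=prestop"},
						},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("prestop-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting the pod triggers the preStop hook before the container is killed.
	_ = client.CoreV1().Pods("prestop-demo").Delete(context.TODO(), "tester", metav1.DeleteOptions{})
}
```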
• [SLOW TEST:13.291 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":294,"completed":190,"skipped":3160,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:36.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:36.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5084" for this suite. 
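The 406 this spec checks for comes from asking the apiserver to render a resource as a meta.k8s.io/v1 Table when the backend cannot produce one. The request is an ordinary GET with a Table Accept header; a minimal sketch (the resource and namespace are illustrative, not the test's custom backend):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Ask for server-side Table rendering; backends that do not implement
	// partial-metadata transformation answer 406 Not Acceptable.
	raw, err := client.CoreV1().RESTClient().
		Get().
		Resource("pods").
		Namespace("default").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		panic(err) // a 406 from a non-conforming backend surfaces here
	}
	fmt.Println(string(raw))
}
```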
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":294,"completed":191,"skipped":3161,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:36.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9804/configmap-test-a7b0f9a6-88de-43cd-8c09-616058d0c2f0 STEP: Creating a pod to test consume configMaps Aug 17 00:34:36.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8" in namespace "configmap-9804" to be "Succeeded or Failed" Aug 17 00:34:36.990: INFO: Pod "pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118886ms Aug 17 00:34:38.994: INFO: Pod "pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01471106s Aug 17 00:34:40.999: INFO: Pod "pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019198688s STEP: Saw pod success Aug 17 00:34:40.999: INFO: Pod "pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8" satisfied condition "Succeeded or Failed" Aug 17 00:34:41.002: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8 container env-test: STEP: delete the pod Aug 17 00:34:41.069: INFO: Waiting for pod pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8 to disappear Aug 17 00:34:41.079: INFO: Pod pod-configmaps-8f1755d8-f2c0-41a0-a8d2-05f8b8d6c1b8 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:34:41.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9804" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":294,"completed":192,"skipped":3161,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:34:41.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 17 00:34:41.182: INFO: Waiting up to 1m0s for all nodes to be ready Aug 17 00:35:41.205: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 17 00:35:41.227: INFO: Created pod: pod0-sched-preemption-low-priority Aug 17 00:35:41.282: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:35:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8751" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:78.395 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":294,"completed":193,"skipped":3162,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:35:59.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 17 00:35:59.629: INFO: Waiting up to 5m0s for pod "pod-f5b0746a-d203-4bca-af60-f132ef5321ca" in namespace "emptydir-8045" to be "Succeeded or Failed" Aug 17 00:35:59.670: INFO: Pod "pod-f5b0746a-d203-4bca-af60-f132ef5321ca": Phase="Pending", Reason="", readiness=false. Elapsed: 41.065377ms Aug 17 00:36:01.734: INFO: Pod "pod-f5b0746a-d203-4bca-af60-f132ef5321ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105571788s Aug 17 00:36:03.738: INFO: Pod "pod-f5b0746a-d203-4bca-af60-f132ef5321ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108887318s STEP: Saw pod success Aug 17 00:36:03.738: INFO: Pod "pod-f5b0746a-d203-4bca-af60-f132ef5321ca" satisfied condition "Succeeded or Failed" Aug 17 00:36:03.740: INFO: Trying to get logs from node latest-worker pod pod-f5b0746a-d203-4bca-af60-f132ef5321ca container test-container: STEP: delete the pod Aug 17 00:36:03.787: INFO: Waiting for pod pod-f5b0746a-d203-4bca-af60-f132ef5321ca to disappear Aug 17 00:36:03.801: INFO: Pod pod-f5b0746a-d203-4bca-af60-f132ef5321ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:36:03.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8045" for this suite. 
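The (non-root,0644,tmpfs) case above means: run as a non-root UID, write a file with 0644 permissions, and back the emptyDir with memory instead of the node's default medium. A sketch of such a pod; the UID, paths, and namespace are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nonRoot := int64(1001) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0644 file, then show its mode and content.
				Command: []string{"sh", "-c",
					"echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f && cat /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("emptydir-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```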
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":194,"skipped":3181,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:36:03.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 17 00:36:04.055: INFO: Waiting up to 1m0s for all nodes to be ready Aug 17 00:37:04.078: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:37:04.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Aug 17 00:37:10.988: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:37:23.104: INFO: pods created so far: [1 1 1] Aug 17 00:37:23.104: INFO: length of pods created so far: 3 Aug 17 00:37:35.204: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:37:42.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2487" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:37:42.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7096" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:98.718 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":294,"completed":195,"skipped":3190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:37:42.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:37:42.607: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 17 00:37:44.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3639 create -f -' Aug 17 00:37:50.804: INFO: stderr: "" Aug 17 00:37:50.804: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 17 00:37:50.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3639 delete e2e-test-crd-publish-openapi-1331-crds test-cr' Aug 17 00:37:50.930: INFO: stderr: "" Aug 17 00:37:50.930: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 17 00:37:50.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3639 apply -f -' Aug 17 00:37:51.203: INFO: stderr: "" Aug 17 00:37:51.203: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 17 00:37:51.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3639 
delete e2e-test-crd-publish-openapi-1331-crds test-cr' Aug 17 00:37:51.305: INFO: stderr: "" Aug 17 00:37:51.305: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 17 00:37:51.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1331-crds' Aug 17 00:37:51.578: INFO: stderr: "" Aug 17 00:37:51.578: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1331-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:37:54.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3639" for this suite. • [SLOW TEST:12.036 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":294,"completed":196,"skipped":3241,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:37:54.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:37:54.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4714" for this suite. 
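The Kubelet case above creates a pod whose only container exits non-zero forever (a CrashLoopBackOff by construction), then verifies the pod can still be deleted. A sketch of the same flow; the pod name and namespace are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("kubelet-demo") // illustrative namespace

	// /bin/false exits 1 immediately; with the default RestartPolicy Always
	// the kubelet restarts it forever.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := pods.Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The point of the spec: even a perpetually failing pod must be deletable.
	if err := pods.Delete(context.TODO(), "bin-false", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```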
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":294,"completed":197,"skipped":3253,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:37:54.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:37:55.060: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:37:56.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6190" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":294,"completed":198,"skipped":3268,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:37:56.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 17 00:37:56.733: INFO: Waiting up to 5m0s for pod "pod-955b3b24-875a-4441-80f9-23d813532237" in namespace "emptydir-3040" to be "Succeeded or Failed" Aug 17 00:37:56.742: INFO: Pod "pod-955b3b24-875a-4441-80f9-23d813532237": Phase="Pending", Reason="", readiness=false. Elapsed: 9.422416ms Aug 17 00:37:58.845: INFO: Pod "pod-955b3b24-875a-4441-80f9-23d813532237": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.112192794s Aug 17 00:38:01.067: INFO: Pod "pod-955b3b24-875a-4441-80f9-23d813532237": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334193204s Aug 17 00:38:03.070: INFO: Pod "pod-955b3b24-875a-4441-80f9-23d813532237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.337298068s STEP: Saw pod success Aug 17 00:38:03.070: INFO: Pod "pod-955b3b24-875a-4441-80f9-23d813532237" satisfied condition "Succeeded or Failed" Aug 17 00:38:03.073: INFO: Trying to get logs from node latest-worker pod pod-955b3b24-875a-4441-80f9-23d813532237 container test-container: STEP: delete the pod Aug 17 00:38:03.119: INFO: Waiting for pod pod-955b3b24-875a-4441-80f9-23d813532237 to disappear Aug 17 00:38:03.133: INFO: Pod pod-955b3b24-875a-4441-80f9-23d813532237 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:38:03.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3040" for this suite. • [SLOW TEST:6.523 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":199,"skipped":3279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:38:03.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 17 00:38:03.201: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:38:18.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3881" for this suite.
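"Mark a version not served" is a CRD-spec edit: flip Served to false on one entry of spec.versions, and the published OpenAPI definition for that version disappears while the other version stays intact. A sketch of a minimal two-version CRD of the kind such a test sets up; the group, names, and schema are illustrative:

```go
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(config)

	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd", Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				// Updating this entry to Served: false later removes the v2
				// definition from the published spec, which is what the test checks.
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```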
• [SLOW TEST:15.234 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":294,"completed":200,"skipped":3307,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:38:18.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5f131b9c-8030-4d60-88bd-f67b872cfeeb STEP: Creating a pod to test consume configMaps Aug 17 00:38:18.433: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de" in namespace "projected-9327" to be "Succeeded or Failed" Aug 17 00:38:18.438: INFO: Pod "pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323177ms Aug 17 00:38:20.571: INFO: Pod "pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138259743s Aug 17 00:38:22.576: INFO: Pod "pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142538496s STEP: Saw pod success Aug 17 00:38:22.576: INFO: Pod "pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de" satisfied condition "Succeeded or Failed" Aug 17 00:38:22.578: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de container projected-configmap-volume-test: STEP: delete the pod Aug 17 00:38:22.811: INFO: Waiting for pod pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de to disappear Aug 17 00:38:22.845: INFO: Pod pod-projected-configmaps-c6b54b54-f662-43c6-b8e4-c3de0eac12de no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:38:22.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9327" for this suite. 
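Consuming a ConfigMap "in multiple volumes in the same pod" means two projected volumes that both reference it, mounted at different paths. A sketch; the names, mount paths, and namespace are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Both volumes project the same ConfigMap.
	projected := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
						},
					}},
				},
			},
		}
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				projected("projected-configmap-volume-1"),
				projected("projected-configmap-volume-2"),
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/projected-1 /etc/projected-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/projected-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-2"},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("projected-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```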
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":201,"skipped":3322,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:38:23.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0817 00:38:25.747940 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 00:39:27.917: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:39:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5134" for this suite. 
• [SLOW TEST:64.896 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":294,"completed":202,"skipped":3330,"failed":0} SS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:39:27.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 17 00:39:28.996: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Aug 17 00:39:29.021: INFO: starting watch STEP: patching STEP: updating Aug 17 00:39:29.033: INFO: waiting for watch events with expected annotations Aug 17 00:39:29.033: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:39:29.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-7941" for this suite. 
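The create/patch/update/delete-collection round-trip above maps directly onto the networking.k8s.io/v1 client, which went GA in the v1.19 line this run targets. A sketch of three of those operations; the backend service, names, and namespace are illustrative:

```go
package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ingresses := client.NetworkingV1().Ingresses("ingress-demo")

	// "creating"
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "test-ingress"},
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "test-service",
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	if _, err := ingresses.Create(context.TODO(), ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// "patching": a merge patch adding an annotation, the kind of change the
	// watch in the log waits to observe.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := ingresses.Patch(context.TODO(), "test-ingress", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// "deleting a collection": remove every Ingress in the namespace at once.
	if err := ingresses.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}
}
```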
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":294,"completed":203,"skipped":3332,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:39:29.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:39:30.239: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:39:32.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:39:34.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:39:36.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221570, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:39:39.416: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:39:39.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6594" for this suite. STEP: Destroying namespace "webhook-6594-markers" for this suite. 
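A fail-closed webhook is one whose failurePolicy is Fail while its backend is unreachable, so the apiserver must reject every matching request rather than let it through. A sketch of such a registration; all names are illustrative and the service reference is deliberately broken, mirroring the "server cannot talk to" setup above:

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	path := "/unreachable" // deliberately points nowhere
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			// The referenced service does not exist, so every admission call
			// errors out; FailurePolicy Fail turns those errors into rejections.
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "no-such-service", Path: &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(context.TODO(), cfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```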
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.454 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":294,"completed":204,"skipped":3336,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:39:39.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-9427dc83-5b35-4257-936d-b6cd5b78cb7e [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:39:39.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2541" for this suite. 
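The empty-key case is pure apiserver validation: a ConfigMap whose data map contains "" as a key is rejected as Invalid before anything is stored. A sketch of provoking and checking that error; the object name and namespace are illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		// The empty string key is what validation must reject.
		Data: map[string]string{"": "value-1"},
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if !apierrors.IsInvalid(err) {
		panic(fmt.Sprintf("expected an Invalid error for the empty key, got: %v", err))
	}
	fmt.Println("apiserver rejected the empty key as expected:", err)
}
```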
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":294,"completed":205,"skipped":3355,"failed":0} SSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:39:39.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:03.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-38" for this suite. • [SLOW TEST:24.064 seconds] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":294,"completed":206,"skipped":3359,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:03.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 00:40:03.869: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4" in namespace "projected-1601" to be "Succeeded or Failed" Aug 
17 00:40:03.889: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.76361ms Aug 17 00:40:06.045: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17625769s Aug 17 00:40:08.167: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29760524s Aug 17 00:40:10.254: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4": Phase="Running", Reason="", readiness=true. Elapsed: 6.385004196s Aug 17 00:40:12.259: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.389508677s STEP: Saw pod success Aug 17 00:40:12.259: INFO: Pod "downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4" satisfied condition "Succeeded or Failed" Aug 17 00:40:12.262: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4 container client-container: STEP: delete the pod Aug 17 00:40:12.335: INFO: Waiting for pod downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4 to disappear Aug 17 00:40:12.351: INFO: Pod downwardapi-volume-6a498d5f-b846-4f98-a3df-b951563ea3c4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:12.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1601" for this suite. • [SLOW TEST:8.643 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":207,"skipped":3372,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:12.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b4f9c6a8-c365-4e7f-bda7-a0bee0d0c985 STEP: Creating a pod to test consume secrets Aug 17 00:40:13.119: INFO: Waiting up to 5m0s for pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f" in namespace "secrets-6042" to be 
"Succeeded or Failed" Aug 17 00:40:13.323: INFO: Pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 203.368847ms Aug 17 00:40:15.327: INFO: Pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207329322s Aug 17 00:40:17.338: INFO: Pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21851348s Aug 17 00:40:19.348: INFO: Pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228779782s STEP: Saw pod success Aug 17 00:40:19.348: INFO: Pod "pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f" satisfied condition "Succeeded or Failed" Aug 17 00:40:19.352: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f container secret-volume-test: STEP: delete the pod Aug 17 00:40:19.459: INFO: Waiting for pod pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f to disappear Aug 17 00:40:19.514: INFO: Pod pod-secrets-a22a4407-afdc-4410-9a75-1587d139dc3f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:19.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6042" for this suite. • [SLOW TEST:7.193 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":208,"skipped":3384,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:19.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ddf30add-00bd-4a18-96b3-f85eb388b682 STEP: Creating a pod to test consume configMaps Aug 17 00:40:19.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80" in namespace "configmap-8127" to be "Succeeded or Failed" Aug 17 00:40:19.633: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007956ms Aug 17 00:40:21.637: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008123228s Aug 17 00:40:23.642: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012583256s Aug 17 00:40:25.646: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016322256s Aug 17 00:40:27.649: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020220699s STEP: Saw pod success Aug 17 00:40:27.650: INFO: Pod "pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80" satisfied condition "Succeeded or Failed" Aug 17 00:40:27.652: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80 container configmap-volume-test: STEP: delete the pod Aug 17 00:40:27.689: INFO: Waiting for pod pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80 to disappear Aug 17 00:40:27.727: INFO: Pod pod-configmaps-cd4f0137-33ca-4722-95a9-cd1583951a80 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:27.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8127" for this suite. • [SLOW TEST:8.182 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":209,"skipped":3392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:27.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:34.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7252" for this suite. • [SLOW TEST:7.178 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":294,"completed":210,"skipped":3435,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:34.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 00:40:35.144: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 00:40:35.274: INFO: Waiting for terminating namespaces to be deleted... 
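For reference, the ResourceQuota shape exercised by the spec that just passed above can be written as a minimal manifest. This is a sketch rather than the object the test actually created; the name and the hard limits are illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota              # illustrative name, not taken from the log
  namespace: resourcequota-7252
spec:
  hard:
    pods: "5"                   # illustrative limits
    requests.cpu: "1"
    requests.memory: 1Gi

After admission, the quota controller populates status.hard (a copy of spec.hard) and status.used (current consumption); "Ensuring resource quota status is calculated" is simply waiting for that status block to appear.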
Aug 17 00:40:35.277: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 00:40:35.281: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:40:35.281: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:40:35.281: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:40:35.281: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:40:35.281: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 00:40:35.285: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:40:35.285: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:40:35.285: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 00:40:35.285: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8467b44a-fbb9-4b92-ba6b-4b9a436051ca 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-8467b44a-fbb9-4b92-ba6b-4b9a436051ca off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8467b44a-fbb9-4b92-ba6b-4b9a436051ca [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:40:45.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5989" for this suite. 
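The relaunched pod in this spec pins itself to the labeled node via a nodeSelector. A minimal sketch of that pod shape, reusing the random label key and value from the log above; the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels             # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-8467b44a-fbb9-4b92-ba6b-4b9a436051ca: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2 # assumption: any image that simply stays running

The scheduler will only bind such a pod to a node whose labels contain that exact key/value pair, which is why the spec labels latest-worker2 first and removes the label again once the pod has been scheduled.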
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.643 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":294,"completed":211,"skipped":3435,"failed":0} SSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:40:45.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5727 STEP: creating service affinity-nodeport in namespace services-5727 STEP: creating replication controller affinity-nodeport in namespace services-5727 I0817 00:40:45.712373 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5727, replica count: 3 I0817 00:40:48.762836 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:40:51.763050 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:40:54.763325 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 00:40:54.774: INFO: Creating new exec pod Aug 17 00:40:59.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5727 execpod-affinity5mbzz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 17 00:41:00.007: INFO: stderr: "I0817 00:40:59.933644 2460 log.go:181] (0xc000e4afd0) (0xc000b69a40) Create stream\nI0817 00:40:59.933694 2460 log.go:181] (0xc000e4afd0) (0xc000b69a40) Stream added, broadcasting: 1\nI0817 00:40:59.937065 2460 log.go:181] (0xc000e4afd0) Reply frame received for 1\nI0817 00:40:59.937093 2460 log.go:181] (0xc000e4afd0) (0xc0004a4000) Create 
stream\nI0817 00:40:59.937102 2460 log.go:181] (0xc000e4afd0) (0xc0004a4000) Stream added, broadcasting: 3\nI0817 00:40:59.937723 2460 log.go:181] (0xc000e4afd0) Reply frame received for 3\nI0817 00:40:59.937747 2460 log.go:181] (0xc000e4afd0) (0xc00014bae0) Create stream\nI0817 00:40:59.937755 2460 log.go:181] (0xc000e4afd0) (0xc00014bae0) Stream added, broadcasting: 5\nI0817 00:40:59.938378 2460 log.go:181] (0xc000e4afd0) Reply frame received for 5\nI0817 00:40:59.999545 2460 log.go:181] (0xc000e4afd0) Data frame received for 3\nI0817 00:40:59.999604 2460 log.go:181] (0xc0004a4000) (3) Data frame handling\nI0817 00:40:59.999633 2460 log.go:181] (0xc000e4afd0) Data frame received for 5\nI0817 00:40:59.999645 2460 log.go:181] (0xc00014bae0) (5) Data frame handling\nI0817 00:40:59.999663 2460 log.go:181] (0xc00014bae0) (5) Data frame sent\nI0817 00:40:59.999692 2460 log.go:181] (0xc000e4afd0) Data frame received for 5\nI0817 00:40:59.999705 2460 log.go:181] (0xc00014bae0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0817 00:41:00.001143 2460 log.go:181] (0xc000e4afd0) Data frame received for 1\nI0817 00:41:00.001161 2460 log.go:181] (0xc000b69a40) (1) Data frame handling\nI0817 00:41:00.001173 2460 log.go:181] (0xc000b69a40) (1) Data frame sent\nI0817 00:41:00.001187 2460 log.go:181] (0xc000e4afd0) (0xc000b69a40) Stream removed, broadcasting: 1\nI0817 00:41:00.001205 2460 log.go:181] (0xc000e4afd0) Go away received\nI0817 00:41:00.001576 2460 log.go:181] (0xc000e4afd0) (0xc000b69a40) Stream removed, broadcasting: 1\nI0817 00:41:00.001595 2460 log.go:181] (0xc000e4afd0) (0xc0004a4000) Stream removed, broadcasting: 3\nI0817 00:41:00.001605 2460 log.go:181] (0xc000e4afd0) (0xc00014bae0) Stream removed, broadcasting: 5\n" Aug 17 00:41:00.007: INFO: stdout: "" Aug 17 00:41:00.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5727 execpod-affinity5mbzz -- /bin/sh -x -c nc -zv -t -w 2 10.105.254.128 80' Aug 17 00:41:00.227: INFO: stderr: "I0817 00:41:00.144241 2478 log.go:181] (0xc0006babb0) (0xc000b2a5a0) Create stream\nI0817 00:41:00.144312 2478 log.go:181] (0xc0006babb0) (0xc000b2a5a0) Stream added, broadcasting: 1\nI0817 00:41:00.149282 2478 log.go:181] (0xc0006babb0) Reply frame received for 1\nI0817 00:41:00.149323 2478 log.go:181] (0xc0006babb0) (0xc000b0b040) Create stream\nI0817 00:41:00.149333 2478 log.go:181] (0xc0006babb0) (0xc000b0b040) Stream added, broadcasting: 3\nI0817 00:41:00.150048 2478 log.go:181] (0xc0006babb0) Reply frame received for 3\nI0817 00:41:00.150070 2478 log.go:181] (0xc0006babb0) (0xc0004fe0a0) Create stream\nI0817 00:41:00.150077 2478 log.go:181] (0xc0006babb0) (0xc0004fe0a0) Stream added, broadcasting: 5\nI0817 00:41:00.150846 2478 log.go:181] (0xc0006babb0) Reply frame received for 5\nI0817 00:41:00.215291 2478 log.go:181] (0xc0006babb0) Data frame received for 3\nI0817 00:41:00.215313 2478 log.go:181] (0xc000b0b040) (3) Data frame handling\nI0817 00:41:00.215352 2478 log.go:181] (0xc0006babb0) Data frame received for 5\nI0817 00:41:00.215374 2478 log.go:181] (0xc0004fe0a0) (5) Data frame handling\nI0817 00:41:00.215386 2478 log.go:181] (0xc0004fe0a0) (5) Data frame sent\nI0817 00:41:00.215392 2478 log.go:181] (0xc0006babb0) Data frame received for 5\nI0817 00:41:00.215396 2478 log.go:181] (0xc0004fe0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.254.128 80\nConnection to 10.105.254.128 
80 port [tcp/http] succeeded!\nI0817 00:41:00.217330 2478 log.go:181] (0xc0006babb0) Data frame received for 1\nI0817 00:41:00.217353 2478 log.go:181] (0xc000b2a5a0) (1) Data frame handling\nI0817 00:41:00.217365 2478 log.go:181] (0xc000b2a5a0) (1) Data frame sent\nI0817 00:41:00.217377 2478 log.go:181] (0xc0006babb0) (0xc000b2a5a0) Stream removed, broadcasting: 1\nI0817 00:41:00.217477 2478 log.go:181] (0xc0006babb0) Go away received\nI0817 00:41:00.217735 2478 log.go:181] (0xc0006babb0) (0xc000b2a5a0) Stream removed, broadcasting: 1\nI0817 00:41:00.217752 2478 log.go:181] (0xc0006babb0) (0xc000b0b040) Stream removed, broadcasting: 3\nI0817 00:41:00.217759 2478 log.go:181] (0xc0006babb0) (0xc0004fe0a0) Stream removed, broadcasting: 5\n" Aug 17 00:41:00.227: INFO: stdout: "" Aug 17 00:41:00.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5727 execpod-affinity5mbzz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30996' Aug 17 00:41:00.433: INFO: stderr: "I0817 00:41:00.345967 2496 log.go:181] (0xc000647130) (0xc000baf4a0) Create stream\nI0817 00:41:00.346009 2496 log.go:181] (0xc000647130) (0xc000baf4a0) Stream added, broadcasting: 1\nI0817 00:41:00.351186 2496 log.go:181] (0xc000647130) Reply frame received for 1\nI0817 00:41:00.351233 2496 log.go:181] (0xc000647130) (0xc0009b88c0) Create stream\nI0817 00:41:00.351245 2496 log.go:181] (0xc000647130) (0xc0009b88c0) Stream added, broadcasting: 3\nI0817 00:41:00.352152 2496 log.go:181] (0xc000647130) Reply frame received for 3\nI0817 00:41:00.352184 2496 log.go:181] (0xc000647130) (0xc000b9b0e0) Create stream\nI0817 00:41:00.352194 2496 log.go:181] (0xc000647130) (0xc000b9b0e0) Stream added, broadcasting: 5\nI0817 00:41:00.353196 2496 log.go:181] (0xc000647130) Reply frame received for 5\nI0817 00:41:00.422364 2496 log.go:181] (0xc000647130) Data frame received for 5\nI0817 00:41:00.422408 2496 log.go:181] (0xc000b9b0e0) (5) Data frame handling\nI0817 00:41:00.422428 2496 log.go:181] (0xc000b9b0e0) (5) Data frame sent\nI0817 00:41:00.422443 2496 log.go:181] (0xc000647130) Data frame received for 5\nI0817 00:41:00.422455 2496 log.go:181] (0xc000b9b0e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30996\nConnection to 172.18.0.11 30996 port [tcp/30996] succeeded!\nI0817 00:41:00.422497 2496 log.go:181] (0xc000647130) Data frame received for 3\nI0817 00:41:00.422523 2496 log.go:181] (0xc0009b88c0) (3) Data frame handling\nI0817 00:41:00.424542 2496 log.go:181] (0xc000647130) Data frame received for 1\nI0817 00:41:00.424575 2496 log.go:181] (0xc000baf4a0) (1) Data frame handling\nI0817 00:41:00.424597 2496 log.go:181] (0xc000baf4a0) (1) Data frame sent\nI0817 00:41:00.424616 2496 log.go:181] (0xc000647130) (0xc000baf4a0) Stream removed, broadcasting: 1\nI0817 00:41:00.424634 2496 log.go:181] (0xc000647130) Go away received\nI0817 00:41:00.425105 2496 log.go:181] (0xc000647130) (0xc000baf4a0) Stream removed, broadcasting: 1\nI0817 00:41:00.425129 2496 log.go:181] (0xc000647130) (0xc0009b88c0) Stream removed, broadcasting: 3\nI0817 00:41:00.425146 2496 log.go:181] (0xc000647130) (0xc000b9b0e0) Stream removed, broadcasting: 5\n" Aug 17 00:41:00.433: INFO: stdout: "" Aug 17 00:41:00.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5727 execpod-affinity5mbzz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30996' Aug 17 00:41:00.633: INFO: stderr: "I0817 00:41:00.563419 2515 
log.go:181] (0xc0007cb130) (0xc00056c8c0) Create stream\nI0817 00:41:00.563478 2515 log.go:181] (0xc0007cb130) (0xc00056c8c0) Stream added, broadcasting: 1\nI0817 00:41:00.568613 2515 log.go:181] (0xc0007cb130) Reply frame received for 1\nI0817 00:41:00.568657 2515 log.go:181] (0xc0007cb130) (0xc00053ebe0) Create stream\nI0817 00:41:00.568670 2515 log.go:181] (0xc0007cb130) (0xc00053ebe0) Stream added, broadcasting: 3\nI0817 00:41:00.573264 2515 log.go:181] (0xc0007cb130) Reply frame received for 3\nI0817 00:41:00.573299 2515 log.go:181] (0xc0007cb130) (0xc000452500) Create stream\nI0817 00:41:00.573309 2515 log.go:181] (0xc0007cb130) (0xc000452500) Stream added, broadcasting: 5\nI0817 00:41:00.574115 2515 log.go:181] (0xc0007cb130) Reply frame received for 5\nI0817 00:41:00.623096 2515 log.go:181] (0xc0007cb130) Data frame received for 5\nI0817 00:41:00.623127 2515 log.go:181] (0xc000452500) (5) Data frame handling\nI0817 00:41:00.623147 2515 log.go:181] (0xc000452500) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30996\nConnection to 172.18.0.14 30996 port [tcp/30996] succeeded!\nI0817 00:41:00.623526 2515 log.go:181] (0xc0007cb130) Data frame received for 5\nI0817 00:41:00.623556 2515 log.go:181] (0xc000452500) (5) Data frame handling\nI0817 00:41:00.623577 2515 log.go:181] (0xc0007cb130) Data frame received for 3\nI0817 00:41:00.623594 2515 log.go:181] (0xc00053ebe0) (3) Data frame handling\nI0817 00:41:00.625640 2515 log.go:181] (0xc0007cb130) Data frame received for 1\nI0817 00:41:00.625666 2515 log.go:181] (0xc00056c8c0) (1) Data frame handling\nI0817 00:41:00.625697 2515 log.go:181] (0xc00056c8c0) (1) Data frame sent\nI0817 00:41:00.625718 2515 log.go:181] (0xc0007cb130) (0xc00056c8c0) Stream removed, broadcasting: 1\nI0817 00:41:00.625862 2515 log.go:181] (0xc0007cb130) Go away received\nI0817 00:41:00.626286 2515 log.go:181] (0xc0007cb130) (0xc00056c8c0) Stream removed, broadcasting: 1\nI0817 00:41:00.626314 2515 log.go:181] (0xc0007cb130) (0xc00053ebe0) Stream removed, broadcasting: 3\nI0817 00:41:00.626325 2515 log.go:181] (0xc0007cb130) (0xc000452500) Stream removed, broadcasting: 5\n" Aug 17 00:41:00.633: INFO: stdout: "" Aug 17 00:41:00.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5727 execpod-affinity5mbzz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30996/ ; done' Aug 17 00:41:00.938: INFO: stderr: "I0817 00:41:00.765138 2533 log.go:181] (0xc000d9ad10) (0xc000f92500) Create stream\nI0817 00:41:00.765224 2533 log.go:181] (0xc000d9ad10) (0xc000f92500) Stream added, broadcasting: 1\nI0817 00:41:00.770139 2533 log.go:181] (0xc000d9ad10) Reply frame received for 1\nI0817 00:41:00.770188 2533 log.go:181] (0xc000d9ad10) (0xc000794b40) Create stream\nI0817 00:41:00.770202 2533 log.go:181] (0xc000d9ad10) (0xc000794b40) Stream added, broadcasting: 3\nI0817 00:41:00.771339 2533 log.go:181] (0xc000d9ad10) Reply frame received for 3\nI0817 00:41:00.771407 2533 log.go:181] (0xc000d9ad10) (0xc00044a640) Create stream\nI0817 00:41:00.771435 2533 log.go:181] (0xc000d9ad10) (0xc00044a640) Stream added, broadcasting: 5\nI0817 00:41:00.772384 2533 log.go:181] (0xc000d9ad10) Reply frame received for 5\nI0817 00:41:00.834348 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.834387 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.834399 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 
00:41:00.834449 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.834494 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.834515 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.835475 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.835510 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.835540 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.835751 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.835774 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.835810 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.836067 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.836088 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.836107 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.842962 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.842991 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.843028 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.846122 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.846145 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.846163 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.846198 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.846212 2533 log.go:181] (0xc00044a640) (5) Data frame sent\nI0817 00:41:00.846224 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.846234 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.846247 2533 log.go:181] (0xc000794b40) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.846311 2533 log.go:181] (0xc00044a640) (5) Data frame sent\nI0817 00:41:00.852192 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.852211 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.852228 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.852655 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.852673 2533 log.go:181] (0xc00044a640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.852700 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.852797 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.852825 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.852840 2533 log.go:181] (0xc00044a640) (5) Data frame sent\nI0817 00:41:00.860534 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.860558 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.860569 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.860577 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.860589 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.860594 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.860599 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.860603 2533 log.go:181] (0xc000794b40) (3) Data frame 
handling\nI0817 00:41:00.860613 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.864105 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.864120 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.864128 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.864607 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.864626 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.864634 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.864659 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.864674 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.864687 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.871025 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.871037 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.871043 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.871640 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.871666 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.871679 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.871700 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.871712 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.871723 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.875457 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.875473 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.875486 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.876040 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.876076 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.876094 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.876121 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.876133 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.876146 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.882530 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.882557 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.882587 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.882999 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.883022 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.883036 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.883046 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.883053 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.883060 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.887470 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.887496 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.887518 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.888134 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.888149 2533 log.go:181] (0xc000794b40) (3) 
Data frame handling\nI0817 00:41:00.888158 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.888173 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.888185 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.888204 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.893792 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.893810 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.893825 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.894209 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.894238 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.894250 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.894266 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.894276 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.894297 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.899469 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.899504 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.899529 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.899925 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.899945 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.899963 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.899982 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.900004 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.900028 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.904933 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.904962 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.904987 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.905477 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.905509 2533 log.go:181] (0xc00044a640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/I0817 00:41:00.905521 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.905533 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.905539 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.905548 2533 log.go:181] (0xc00044a640) (5) Data frame sent\nI0817 00:41:00.905554 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.905559 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.905567 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n\nI0817 00:41:00.911835 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.911859 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.911876 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.912479 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.912502 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.912513 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.912524 2533 log.go:181] (0xc000d9ad10) Data 
frame received for 3\nI0817 00:41:00.912530 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.912537 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.919610 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.919632 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.919654 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.920264 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.920345 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.920368 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.920381 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.920388 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.920399 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.925848 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.925887 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.925920 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.926278 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.926294 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.926301 2533 log.go:181] (0xc00044a640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30996/\nI0817 00:41:00.926522 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.926535 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.926544 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.931992 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.932011 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.932023 2533 log.go:181] (0xc000794b40) (3) Data frame sent\nI0817 00:41:00.933003 2533 log.go:181] (0xc000d9ad10) Data frame received for 3\nI0817 00:41:00.933033 2533 log.go:181] (0xc000794b40) (3) Data frame handling\nI0817 00:41:00.933062 2533 log.go:181] (0xc000d9ad10) Data frame received for 5\nI0817 00:41:00.933087 2533 log.go:181] (0xc00044a640) (5) Data frame handling\nI0817 00:41:00.934488 2533 log.go:181] (0xc000d9ad10) Data frame received for 1\nI0817 00:41:00.934504 2533 log.go:181] (0xc000f92500) (1) Data frame handling\nI0817 00:41:00.934512 2533 log.go:181] (0xc000f92500) (1) Data frame sent\nI0817 00:41:00.934525 2533 log.go:181] (0xc000d9ad10) (0xc000f92500) Stream removed, broadcasting: 1\nI0817 00:41:00.934559 2533 log.go:181] (0xc000d9ad10) Go away received\nI0817 00:41:00.934842 2533 log.go:181] (0xc000d9ad10) (0xc000f92500) Stream removed, broadcasting: 1\nI0817 00:41:00.934856 2533 log.go:181] (0xc000d9ad10) (0xc000794b40) Stream removed, broadcasting: 3\nI0817 00:41:00.934867 2533 log.go:181] (0xc000d9ad10) (0xc00044a640) Stream removed, broadcasting: 5\n" Aug 17 00:41:00.939: INFO: stdout: "\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7\naffinity-nodeport-jqbf7" Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received 
response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Received response from host: affinity-nodeport-jqbf7 Aug 17 00:41:00.939: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-5727, will wait for the garbage collector to delete the pods Aug 17 00:41:01.998: INFO: Deleting ReplicationController affinity-nodeport took: 7.067148ms Aug 17 00:41:02.798: INFO: Terminating ReplicationController affinity-nodeport pods took: 800.221703ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:20.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5727" for this suite. 
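The behaviour verified above — sixteen consecutive curls all answered by the same endpoint, affinity-nodeport-jqbf7 — comes from the service's session affinity setting. A minimal sketch of such a service; the selector and ports are illustrative, only the name, namespace, and type mirror the log:

apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport
  namespace: services-5727
spec:
  type: NodePort
  sessionAffinity: ClientIP     # pin each client IP to one backend pod
  selector:
    name: affinity-nodeport     # assumption: label carried by the RC's pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376            # illustrative container port

With sessionAffinity: ClientIP, kube-proxy keeps per-client affinity state, so repeated requests from the exec pod keep hitting one of the three replicas instead of being spread round-robin.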
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:34.862 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":212,"skipped":3440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:20.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:41:21.040: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:41:23.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221680, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:41:25.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, 
loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221681, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221680, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:41:28.124: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:28.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2053" for this suite. STEP: Destroying namespace "webhook-2053-markers" for this suite. 
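For context, a dummy validating webhook configuration of the kind this spec creates and then deletes looks roughly like the following. The configuration name, webhook identifier, rules, path, and caBundle placeholder are illustrative; only the service name and namespace echo the log:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-dummy-validating    # illustrative name
webhooks:
- name: dummy.example.com            # hypothetical webhook identifier
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]        # illustrative rule
  clientConfig:
    service:
      namespace: webhook-2053
      name: e2e-test-webhook
      path: /always-deny             # illustrative path
    caBundle: "<base64 CA bundle>"   # placeholder, must be real base64 in practice

The point of the spec is that even with webhooks registered against ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, the apiserver still allows such dummy configurations to be deleted, so a misbehaving webhook cannot lock administrators out of webhook management.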
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.085 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":294,"completed":213,"skipped":3525,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:28.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 00:41:33.208: INFO: Successfully updated pod "annotationupdatebc0dc616-3a55-47a5-b2e6-9e334214d380" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:35.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-373" for this suite. 
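The annotation-update spec above relies on a projected downwardAPI volume. A minimal sketch of the pod fragment involved; the pod name, annotation, and mount path are illustrative, and the image is the agnhost image seen elsewhere in this log:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate             # illustrative name
  annotations:
    build: "one"                     # the value the test later mutates
spec:
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations

Because the kubelet refreshes downwardAPI volume contents on its periodic sync, patching the pod's annotations eventually rewrites /etc/podinfo/annotations inside the running container; "Successfully updated pod" marks the API update, after which the test watches for the new file contents.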
• [SLOW TEST:6.747 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":214,"skipped":3529,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:35.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:39.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1962" for this suite. 
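The Docker Containers spec above starts a container with neither command nor args set, so the image's own defaults apply. A minimal sketch; the pod name is illustrative and the image is the agnhost image seen elsewhere in this log:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
    # no command: and no args: fields, so the image ENTRYPOINT
    # and CMD decide what runs

In Kubernetes terms, command overrides the image ENTRYPOINT and args overrides CMD; when both are omitted, the container runtime falls back to whatever the image defines, which is exactly the default behaviour this spec asserts.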
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":294,"completed":215,"skipped":3550,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:39.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:41:39.581: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 17 00:41:44.585: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 17 00:41:44.585: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 00:41:44.668: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2755 /apis/apps/v1/namespaces/deployment-2755/deployments/test-cleanup-deployment 9af1b1ea-4ce5-44c6-8454-8ea62dd44ccb 551588 1 2020-08-17 00:41:44 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-17 00:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033d51a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 17 00:41:44.713: INFO: New ReplicaSet "test-cleanup-deployment-bccdddf9b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-bccdddf9b deployment-2755 /apis/apps/v1/namespaces/deployment-2755/replicasets/test-cleanup-deployment-bccdddf9b eb6c3d71-37b7-4712-867f-8d819e667876 551590 1 2020-08-17 00:41:44 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9af1b1ea-4ce5-44c6-8454-8ea62dd44ccb 0xc0039fc2d0 0xc0039fc2d1}] [] [{kube-controller-manager Update apps/v1 2020-08-17 00:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9af1b1ea-4ce5-44c6-8454-8ea62dd44ccb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: bccdddf9b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039fc348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 
00:41:44.713: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 17 00:41:44.713: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2755 /apis/apps/v1/namespaces/deployment-2755/replicasets/test-cleanup-controller 106ea833-fb14-4248-bca8-6501d20b7e36 551589 1 2020-08-17 00:41:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9af1b1ea-4ce5-44c6-8454-8ea62dd44ccb 0xc0039fc1c7 0xc0039fc1c8}] [] [{e2e.test Update apps/v1 2020-08-17 00:41:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 00:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"9af1b1ea-4ce5-44c6-8454-8ea62dd44ccb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0039fc268 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 17 00:41:45.047: INFO: Pod "test-cleanup-controller-hzfcq" is available: &Pod{ObjectMeta:{test-cleanup-controller-hzfcq test-cleanup-controller- deployment-2755 /api/v1/namespaces/deployment-2755/pods/test-cleanup-controller-hzfcq 6e3b0029-8371-4c9d-baf7-7d5c08573875 551580 0 2020-08-17 00:41:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 106ea833-fb14-4248-bca8-6501d20b7e36 0xc0039fc7f7 0xc0039fc7f8}] [] [{kube-controller-manager Update v1 2020-08-17 00:41:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"106ea833-fb14-4248-bca8-6501d20b7e36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet 
Update v1 2020-08-17 00:41:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vtpdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vtpdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vtpdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:41:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.89,StartTime:2020-08-17 00:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 00:41:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6468b73d23662154efa7ac493323589b6f37cffeed314d9f4436390d20030f80,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 00:41:45.047: INFO: Pod "test-cleanup-deployment-bccdddf9b-j8ctp" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-bccdddf9b-j8ctp test-cleanup-deployment-bccdddf9b- deployment-2755 /api/v1/namespaces/deployment-2755/pods/test-cleanup-deployment-bccdddf9b-j8ctp cf1dc094-1f82-45b5-8fc5-f45cbae64aff 551595 0 2020-08-17 00:41:44 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-bccdddf9b eb6c3d71-37b7-4712-867f-8d819e667876 0xc0039fc9b0 0xc0039fc9b1}] [] [{kube-controller-manager Update v1 2020-08-17 00:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb6c3d71-37b7-4712-867f-8d819e667876\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vtpdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vtpdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vtpdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPres
ent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 00:41:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:45.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2755" for this suite. • [SLOW TEST:6.141 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":294,"completed":216,"skipped":3555,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:45.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:41:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6503" for this suite. • [SLOW TEST:11.583 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":294,"completed":217,"skipped":3558,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:41:57.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7713.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7713.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 00:42:03.489: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.493: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.495: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.498: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.506: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.508: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.511: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.513: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:03.518: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:08.523: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource 
(get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.526: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.530: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.533: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.540: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.542: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.545: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.547: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:08.555: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:13.524: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.528: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.531: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.535: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from 
pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.543: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.546: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.549: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.556: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:13.571: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:18.523: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.527: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.534: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.536: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.543: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.546: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods 
dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.548: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.550: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:18.557: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:23.525: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.529: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.532: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.534: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.542: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.545: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.547: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.550: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:23.555: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:28.524: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.527: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.531: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.534: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.542: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.544: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.547: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.550: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local from pod dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d: the server could not find the requested resource (get pods dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d) Aug 17 00:42:28.555: INFO: Lookups using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7713.svc.cluster.local jessie_udp@dns-test-service-2.dns-7713.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7713.svc.cluster.local] Aug 17 00:42:33.554: INFO: DNS probes using dns-7713/dns-test-f0065fa5-876d-4bb8-b66d-51308fce2d1d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:42:34.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7713" for this suite. • [SLOW TEST:36.970 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":294,"completed":218,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:42:34.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:42:40.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-214" for this suite. 
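------------------------------
The Kubelet test above schedules a busybox container with a read-only root filesystem and expects the write to fail. A minimal client-go sketch of that kind of pod, under stated assumptions: the pod name "readonly-demo", the "default" namespace, the busybox image tag, and the shell command are illustrative choices, not the suite's own fixture; the kubeconfig path matches the ">>> kubeConfig" lines in this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Kubeconfig path as logged throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readonly-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo test > /file"},
				SecurityContext: &corev1.SecurityContext{
					// With a read-only root filesystem the redirect above
					// fails, so the container exits non-zero instead of
					// writing to /file.
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}
------------------------------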
• [SLOW TEST:6.103 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":219,"skipped":3600,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:42:40.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3427 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 00:42:40.663: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 00:42:40.855: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:42:42.859: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:42:44.993: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:42:46.873: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:42:48.866: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:42:50.859: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:42:52.858: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:42:54.859: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:42:56.859: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 00:42:56.866: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 00:42:58.871: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 00:43:01.454: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 00:43:12.750: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.93 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 
00:43:12.750: INFO: >>> kubeConfig: /root/.kube/config I0817 00:43:12.792238 7 log.go:181] (0xc003383760) (0xc00391cb40) Create stream I0817 00:43:12.792276 7 log.go:181] (0xc003383760) (0xc00391cb40) Stream added, broadcasting: 1 I0817 00:43:12.795061 7 log.go:181] (0xc003383760) Reply frame received for 1 I0817 00:43:12.795136 7 log.go:181] (0xc003383760) (0xc00242ae60) Create stream I0817 00:43:12.795175 7 log.go:181] (0xc003383760) (0xc00242ae60) Stream added, broadcasting: 3 I0817 00:43:12.796117 7 log.go:181] (0xc003383760) Reply frame received for 3 I0817 00:43:12.796157 7 log.go:181] (0xc003383760) (0xc0017b7ae0) Create stream I0817 00:43:12.796168 7 log.go:181] (0xc003383760) (0xc0017b7ae0) Stream added, broadcasting: 5 I0817 00:43:12.797218 7 log.go:181] (0xc003383760) Reply frame received for 5 I0817 00:43:13.856582 7 log.go:181] (0xc003383760) Data frame received for 3 I0817 00:43:13.856614 7 log.go:181] (0xc00242ae60) (3) Data frame handling I0817 00:43:13.856622 7 log.go:181] (0xc00242ae60) (3) Data frame sent I0817 00:43:13.856627 7 log.go:181] (0xc003383760) Data frame received for 3 I0817 00:43:13.856666 7 log.go:181] (0xc003383760) Data frame received for 5 I0817 00:43:13.856709 7 log.go:181] (0xc0017b7ae0) (5) Data frame handling I0817 00:43:13.856811 7 log.go:181] (0xc00242ae60) (3) Data frame handling I0817 00:43:13.859170 7 log.go:181] (0xc003383760) Data frame received for 1 I0817 00:43:13.859183 7 log.go:181] (0xc00391cb40) (1) Data frame handling I0817 00:43:13.859189 7 log.go:181] (0xc00391cb40) (1) Data frame sent I0817 00:43:13.859196 7 log.go:181] (0xc003383760) (0xc00391cb40) Stream removed, broadcasting: 1 I0817 00:43:13.859266 7 log.go:181] (0xc003383760) Go away received I0817 00:43:13.859408 7 log.go:181] (0xc003383760) (0xc00391cb40) Stream removed, broadcasting: 1 I0817 00:43:13.859430 7 log.go:181] (0xc003383760) (0xc00242ae60) Stream removed, broadcasting: 3 I0817 00:43:13.859441 7 log.go:181] (0xc003383760) (0xc0017b7ae0) Stream removed, broadcasting: 5 Aug 17 00:43:13.859: INFO: Found all expected endpoints: [netserver-0] Aug 17 00:43:14.419: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.72 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:43:14.419: INFO: >>> kubeConfig: /root/.kube/config I0817 00:43:14.457636 7 log.go:181] (0xc002952630) (0xc0023cfb80) Create stream I0817 00:43:14.457665 7 log.go:181] (0xc002952630) (0xc0023cfb80) Stream added, broadcasting: 1 I0817 00:43:14.461398 7 log.go:181] (0xc002952630) Reply frame received for 1 I0817 00:43:14.461430 7 log.go:181] (0xc002952630) (0xc002b6e000) Create stream I0817 00:43:14.461440 7 log.go:181] (0xc002952630) (0xc002b6e000) Stream added, broadcasting: 3 I0817 00:43:14.462499 7 log.go:181] (0xc002952630) Reply frame received for 3 I0817 00:43:14.462523 7 log.go:181] (0xc002952630) (0xc003503f40) Create stream I0817 00:43:14.462531 7 log.go:181] (0xc002952630) (0xc003503f40) Stream added, broadcasting: 5 I0817 00:43:14.463317 7 log.go:181] (0xc002952630) Reply frame received for 5 I0817 00:43:15.508865 7 log.go:181] (0xc002952630) Data frame received for 3 I0817 00:43:15.508911 7 log.go:181] (0xc002b6e000) (3) Data frame handling I0817 00:43:15.508943 7 log.go:181] (0xc002b6e000) (3) Data frame sent I0817 00:43:15.509341 7 log.go:181] (0xc002952630) Data frame received for 5 I0817 00:43:15.509393 7 log.go:181] 
(0xc003503f40) (5) Data frame handling I0817 00:43:15.509441 7 log.go:181] (0xc002952630) Data frame received for 3 I0817 00:43:15.509486 7 log.go:181] (0xc002b6e000) (3) Data frame handling I0817 00:43:15.511125 7 log.go:181] (0xc002952630) Data frame received for 1 I0817 00:43:15.511146 7 log.go:181] (0xc0023cfb80) (1) Data frame handling I0817 00:43:15.511180 7 log.go:181] (0xc0023cfb80) (1) Data frame sent I0817 00:43:15.511208 7 log.go:181] (0xc002952630) (0xc0023cfb80) Stream removed, broadcasting: 1 I0817 00:43:15.511283 7 log.go:181] (0xc002952630) Go away received I0817 00:43:15.511335 7 log.go:181] (0xc002952630) (0xc0023cfb80) Stream removed, broadcasting: 1 I0817 00:43:15.511361 7 log.go:181] (0xc002952630) (0xc002b6e000) Stream removed, broadcasting: 3 I0817 00:43:15.511375 7 log.go:181] (0xc002952630) (0xc003503f40) Stream removed, broadcasting: 5 Aug 17 00:43:15.511: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:43:15.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3427" for this suite. • [SLOW TEST:35.231 seconds] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":220,"skipped":3612,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:43:15.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-a7885ff9-653c-458f-9c82-20746a2aade5 in namespace container-probe-2962 Aug 17 00:43:27.484: INFO: Started pod liveness-a7885ff9-653c-458f-9c82-20746a2aade5 in namespace container-probe-2962 STEP: checking 
the pod's current state and verifying that restartCount is present Aug 17 00:43:27.899: INFO: Initial restart count of pod liveness-a7885ff9-653c-458f-9c82-20746a2aade5 is 0 Aug 17 00:43:52.709: INFO: Restart count of pod container-probe-2962/liveness-a7885ff9-653c-458f-9c82-20746a2aade5 is now 1 (24.810031121s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:43:52.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2962" for this suite. • [SLOW TEST:37.281 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":221,"skipped":3618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:43:52.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 00:43:53.362: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 00:43:53.395: INFO: Waiting for terminating namespaces to be deleted... 
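------------------------------
The container-probe test above watches the pod's restartCount rise from 0 to 1 once its HTTP liveness probe fails. A sketch of the shape of pod spec involved, under stated assumptions: the "liveness-demo" name, the agnhost "liveness" argument, port 8080, and the probe thresholds are illustrative; only the agnhost image URL is taken from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod builds a pod whose container is restarted by the kubelet
// once GETs against /healthz start failing.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					// Handler is the v1.19-era field name (later renamed
					// ProbeHandler). Each failed GET on /healthz counts
					// against FailureThreshold; on failure the kubelet
					// restarts the container and restartCount increments.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() {
	fmt.Println(livenessPod().Name)
}
------------------------------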
Aug 17 00:43:53.407: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 00:43:53.421: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:43:53.421: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:43:53.421: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:43:53.421: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:43:53.421: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 00:43:53.425: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 00:43:53.425: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:43:53.425: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 00:43:53.425: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162be780dbb685f1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.162be780dd615754], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:43:54.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5343" for this suite.
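------------------------------
The FailedScheduling events above come from a pod whose non-empty nodeSelector matches no node label, so all three nodes are rejected and the pod never leaves Pending. A minimal sketch of such a pod, under stated assumptions: the label key/value, namespace, and pause image tag are invented for illustration; "restricted-pod" matches the name in the events above, and the kubeconfig path matches this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler reports
			// "0/3 nodes are available: 3 node(s) didn't match node
			// selector." and the pod stays Pending.
			NodeSelector: map[string]string{"example.com/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created; expect the pod to remain Pending")
}
------------------------------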
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":294,"completed":222,"skipped":3641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:43:54.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 00:43:55.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 00:43:57.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 00:43:59.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733221835, loc:(*time.Location)(0x7e21f00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 00:44:02.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:02.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5587" for this suite. STEP: Destroying namespace "webhook-5587-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.208 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":294,"completed":223,"skipped":3697,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:02.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:44:02.705: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:11.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-8643" for this suite. • [SLOW TEST:8.945 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":294,"completed":224,"skipped":3710,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:11.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 17 00:44:12.041: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 552389 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:44:12.041: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 552391 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:44:12.041: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 
552392 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 17 00:44:22.583: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 552429 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:44:22.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 552430 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 00:44:22.584: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2433 /api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-label-changed 00f61eb3-0b33-4d8f-8d9d-c74a8f9ca666 552431 0 2020-08-17 00:44:11 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 00:44:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:22.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2433" for this suite. 
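------------------------------
The spec above drives a label-selector watch and asserts that relabeling an object out of the selector is delivered as a DELETED event, and restoring the label as a fresh ADDED. A minimal client-go sketch of the same pattern; the kubeconfig path, namespace, and selector value here are illustrative assumptions, not values taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Watch only ConfigMaps carrying the label the spec selects on.
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Changing the label so the object no longer matches the selector
	// surfaces as a DELETED event; restoring it surfaces as ADDED again.
	for event := range w.ResultChan() {
		if cm, ok := event.Object.(*corev1.ConfigMap); ok {
			fmt.Printf("Got : %s %s resourceVersion=%s\n", event.Type, cm.Name, cm.ResourceVersion)
		}
	}
}
------------------------------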
• [SLOW TEST:11.404 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":294,"completed":225,"skipped":3723,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:23.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:44:23.840: INFO: Waiting up to 5m0s for pod "busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a" in namespace "security-context-test-6584" to be "Succeeded or Failed" Aug 17 00:44:23.892: INFO: Pod "busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.486074ms Aug 17 00:44:25.903: INFO: Pod "busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06386854s Aug 17 00:44:27.910: INFO: Pod "busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06995008s Aug 17 00:44:27.910: INFO: Pod "busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:27.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6584" for this suite. 
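------------------------------
The runAsUser spec above reduces to a single container-level security context field. A sketch of the pod shape it creates, with illustrative names; the harness then waits for the pod to reach "Succeeded or Failed" exactly as logged above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(65534) // the unprivileged "nobody" uid the spec asserts

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"},
				// Container-level runAsUser; the kubelet starts the process
				// under this uid regardless of the image's default user.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Printf("%s runs as uid %d\n", pod.Name, *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
------------------------------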
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":226,"skipped":3743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:27.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 00:44:28.003: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 00:44:28.022: INFO: Waiting for terminating namespaces to be deleted... Aug 17 00:44:28.037: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 17 00:44:28.041: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:44:28.041: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:44:28.041: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:44:28.041: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:44:28.041: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 17 00:44:28.045: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 00:44:28.045: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 00:44:28.045: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 17 00:44:28.045: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 00:44:28.045: INFO: busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a from security-context-test-6584 started at 2020-08-17 00:44:24 +0000 UTC (1 container statuses recorded) Aug 17 00:44:28.045: INFO: Container busybox-user-65534-85eb2ed9-d21c-45fc-bcfb-397344d4d05a ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0ef2385c-e3d8-403a-b018-ebaeb1602ebc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0ef2385c-e3d8-403a-b018-ebaeb1602ebc off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0ef2385c-e3d8-403a-b018-ebaeb1602ebc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:46.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3465" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.435 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":294,"completed":227,"skipped":3774,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:46.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:44:46.628: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7d6600e4-8578-4c43-94e1-31b21d8d84f9", Controller:(*bool)(0xc00342d4c2), BlockOwnerDeletion:(*bool)(0xc00342d4c3)}} Aug 17 00:44:46.830: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"19ab0097-81e8-41cf-8906-61bdeb2f9033", Controller:(*bool)(0xc0039fcc9a), BlockOwnerDeletion:(*bool)(0xc0039fcc9b)}} Aug 17 00:44:47.007: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0435e00c-9380-43c1-a8e3-b33155c47e1f", Controller:(*bool)(0xc002e097e2), BlockOwnerDeletion:(*bool)(0xc002e097e3)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:44:52.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2976" for this suite. • [SLOW TEST:6.088 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":294,"completed":228,"skipped":3783,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:44:52.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 17 00:46:53.524: INFO: Successfully updated pod "var-expansion-7624a0c5-83d4-4654-9d8b-3677b2bc97c2" STEP: waiting for pod running STEP: deleting the pod gracefully Aug 17 00:46:55.547: INFO: Deleting pod "var-expansion-7624a0c5-83d4-4654-9d8b-3677b2bc97c2" in namespace "var-expansion-6707" Aug 17 00:46:55.551: INFO: Wait up to 5m0s for pod "var-expansion-7624a0c5-83d4-4654-9d8b-3677b2bc97c2" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:47:29.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6707" for this suite. 
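------------------------------
The failing-subpath spec above exercises subPathExpr, which expands container environment variables into a volume mount's subpath at container start; while the expansion is invalid the container cannot start, and the pod only proceeds once its spec is updated. A sketch of the mechanism, not of the exact failing spec, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{{
			// Downward-API env var; subPathExpr below expands it at mount time.
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "workdir",
			MountPath: "/logs",
			// If this expansion yields an invalid path the container cannot
			// start, which is the "failed condition" the spec waits in.
			SubPathExpr: "$(POD_NAME)",
		}},
	}
	fmt.Println(container.VolumeMounts[0].SubPathExpr)
}
------------------------------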
• [SLOW TEST:157.177 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":294,"completed":229,"skipped":3800,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:47:29.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0817 00:47:30.795028 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 00:48:32.812: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:48:32.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5013" for this suite. 
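------------------------------
The garbage-collector spec above hinges on deleteOptions.PropagationPolicy. A minimal client-go sketch of an Orphan delete; deployment name, namespace, and kubeconfig path are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Orphan propagation deletes only the Deployment itself; the garbage
	// collector strips the ownerReference from the dependent ReplicaSet
	// instead of deleting it, which is what the spec above waits to confirm.
	orphan := metav1.DeletePropagationOrphan
	if err := client.AppsV1().Deployments("default").Delete(context.TODO(),
		"sample-deployment", metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
}
------------------------------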
• [SLOW TEST:63.201 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":294,"completed":230,"skipped":3804,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:48:32.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ffa36155-e07f-4a2e-a072-d8e93565ed86 STEP: Creating a pod to test consume secrets Aug 17 00:48:32.928: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c" in namespace "projected-9403" to be "Succeeded or Failed" Aug 17 00:48:32.931: INFO: Pod "pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572616ms Aug 17 00:48:34.936: INFO: Pod "pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008335934s Aug 17 00:48:36.939: INFO: Pod "pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011534004s STEP: Saw pod success Aug 17 00:48:36.939: INFO: Pod "pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c" satisfied condition "Succeeded or Failed" Aug 17 00:48:36.941: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c container projected-secret-volume-test: STEP: delete the pod Aug 17 00:48:37.042: INFO: Waiting for pod pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c to disappear Aug 17 00:48:37.219: INFO: Pod pod-projected-secrets-cf579394-58ff-4c0d-ac36-2eef25c7fa0c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:48:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9403" for this suite. 
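------------------------------
The projected-secret spec above checks the file permissions that defaultMode imposes on projected content. A sketch of the volume shape involved, with illustrative names and an assumed 0400 mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // octal file mode applied to each projected file

	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// defaultMode covers every projected file unless a source
				// item overrides it with its own per-item mode.
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Printf("defaultMode=%#o\n", *vol.VolumeSource.Projected.DefaultMode)
}
------------------------------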
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":231,"skipped":3810,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:48:37.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:48:54.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4262" for this suite. • [SLOW TEST:17.036 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":294,"completed":232,"skipped":3822,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:48:54.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-khxn STEP: Creating a pod to test atomic-volume-subpath Aug 17 00:48:54.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-khxn" in namespace "subpath-3245" to be "Succeeded or Failed" Aug 17 00:48:54.482: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Pending", Reason="", readiness=false. Elapsed: 62.500806ms Aug 17 00:48:56.643: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223582495s Aug 17 00:48:58.647: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 4.227674257s Aug 17 00:49:00.651: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 6.231980681s Aug 17 00:49:02.655: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 8.236441356s Aug 17 00:49:04.660: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 10.240894096s Aug 17 00:49:06.664: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 12.24545737s Aug 17 00:49:08.668: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 14.248732598s Aug 17 00:49:10.673: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 16.253783415s Aug 17 00:49:12.677: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 18.258168181s Aug 17 00:49:14.680: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 20.261467524s Aug 17 00:49:16.685: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 22.265673842s Aug 17 00:49:18.688: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Running", Reason="", readiness=true. Elapsed: 24.269297345s Aug 17 00:49:20.692: INFO: Pod "pod-subpath-test-projected-khxn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.273133291s STEP: Saw pod success Aug 17 00:49:20.692: INFO: Pod "pod-subpath-test-projected-khxn" satisfied condition "Succeeded or Failed" Aug 17 00:49:20.695: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-khxn container test-container-subpath-projected-khxn: STEP: delete the pod Aug 17 00:49:20.929: INFO: Waiting for pod pod-subpath-test-projected-khxn to disappear Aug 17 00:49:20.936: INFO: Pod pod-subpath-test-projected-khxn no longer exists STEP: Deleting pod pod-subpath-test-projected-khxn Aug 17 00:49:20.936: INFO: Deleting pod "pod-subpath-test-projected-khxn" in namespace "subpath-3245" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:49:20.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3245" for this suite. • [SLOW TEST:26.683 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":294,"completed":233,"skipped":3828,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:49:20.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0817 00:49:31.576546 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 00:50:33.602: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:50:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5965" for this suite. 
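------------------------------
The "not orphaning" garbage-collector spec above is the mirror image of the Orphan sketch earlier: the controller is deleted and its pods must be garbage collected. A hedged client-go sketch using Background propagation, with illustrative names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background propagation returns as soon as the controller is deleted;
	// the garbage collector then deletes the dependent pods, the condition
	// the spec above polls for.
	background := metav1.DeletePropagationBackground
	if err := client.CoreV1().ReplicationControllers("default").Delete(context.TODO(),
		"example-rc", metav1.DeleteOptions{PropagationPolicy: &background}); err != nil {
		panic(err)
	}
}

Foreground propagation inverts the ordering (dependents first, owner last), while Orphan, shown earlier, deletes the owner and keeps the dependents.
------------------------------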
• [SLOW TEST:72.664 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":294,"completed":234,"skipped":3831,"failed":0} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:50:33.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 17 00:50:33.749: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 17 00:50:33.772: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 17 00:50:33.772: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 17 00:50:33.790: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 17 00:50:33.790: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 17 00:50:33.873: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} 
memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 17 00:50:33.873: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 17 00:50:41.707: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:50:42.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3884" for this suite. • [SLOW TEST:8.982 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":294,"completed":235,"skipped":3831,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:50:42.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:50:52.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8406" for this suite. 
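------------------------------
The kubelet spec above verifies that a busybox echo command's output lands in the container log. Retrieving that log outside the harness is a short client-go call; pod name, namespace, and kubeconfig path below are illustrative:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Programmatic `kubectl logs`: container stdout is what the spec above
	// inspects after the busybox command has run.
	req := client.CoreV1().Pods("default").GetLogs("busybox-scheduling", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}
------------------------------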
• [SLOW TEST:9.640 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":294,"completed":236,"skipped":3835,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:50:52.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:50:53.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7150' Aug 17 00:51:00.016: INFO: stderr: "" Aug 17 00:51:00.016: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Aug 17 00:51:00.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7150' Aug 17 00:51:00.309: INFO: stderr: "" Aug 17 00:51:00.309: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 17 00:51:01.313: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:51:01.313: INFO: Found 0 / 1 Aug 17 00:51:02.313: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:51:02.314: INFO: Found 0 / 1 Aug 17 00:51:03.342: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:51:03.342: INFO: Found 0 / 1 Aug 17 00:51:04.517: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:51:04.517: INFO: Found 1 / 1 Aug 17 00:51:04.517: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 17 00:51:04.523: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 00:51:04.523: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 17 00:51:04.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe pod agnhost-primary-w6qhn --namespace=kubectl-7150' Aug 17 00:51:04.644: INFO: stderr: "" Aug 17 00:51:04.644: INFO: stdout: "Name: agnhost-primary-w6qhn\nNamespace: kubectl-7150\nPriority: 0\nNode: latest-worker/172.18.0.11\nStart Time: Mon, 17 Aug 2020 00:51:00 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.104\nIPs:\n IP: 10.244.2.104\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://c2e0c6b7a11d31abd5587fe76e7d2376c05b432ab827c218de0b1c1bbf5a5dda\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 17 Aug 2020 00:51:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-n6q9k (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-n6q9k:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-n6q9k\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s Successfully assigned kubectl-7150/agnhost-primary-w6qhn to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker Started container agnhost-primary\n" Aug 17 00:51:04.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-7150' Aug 17 00:51:04.781: INFO: stderr: "" Aug 17 00:51:04.781: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7150\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-w6qhn\n" Aug 17 00:51:04.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-7150' Aug 17 00:51:04.891: INFO: stderr: "" Aug 17 00:51:04.891: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7150\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.108.158.27\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.104:6379\nSession Affinity: None\nEvents: \n" Aug 17 00:51:04.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' Aug 17 00:51:05.045: INFO: stderr: "" Aug 17 00:51:05.045: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:42:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 17 Aug 2020 00:51:00 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 17 Aug 2020 00:49:05 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 17 Aug 2020 00:49:05 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 17 Aug 2020 00:49:05 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 17 Aug 2020 00:49:05 +0000 Sat, 15 Aug 2020 09:42:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 355da13825784523b4a253c23edd1334\n System UUID: 8f367e0f-042b-45ff-9966-5ca6bcc1cc56\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-f7hdg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 39h\n kube-system coredns-f9fd979d6-vxzgb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 39h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39h\n kube-system kindnet-qmj2d 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 39h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 39h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 39h\n kube-system kube-proxy-8zfjc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 39h\n local-path-storage local-path-provisioner-8b46957d4-csnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 17 00:51:05.045: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe namespace kubectl-7150' Aug 17 00:51:05.147: INFO: stderr: "" Aug 17 00:51:05.147: INFO: stdout: "Name: kubectl-7150\nLabels: e2e-framework=kubectl\n e2e-run=465b7e17-0d61-4e7d-ade5-cffdb9c07cf9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:51:05.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7150" for this suite. • [SLOW TEST:12.921 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1100 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":294,"completed":237,"skipped":3839,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:51:05.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:51:05.218: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08" in namespace "security-context-test-7706" to be "Succeeded or Failed" Aug 17 00:51:05.230: INFO: Pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08": Phase="Pending", Reason="", readiness=false. Elapsed: 12.295306ms Aug 17 00:51:07.234: INFO: Pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016496642s Aug 17 00:51:09.237: INFO: Pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.019414776s Aug 17 00:51:11.713: INFO: Pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.495470062s Aug 17 00:51:11.713: INFO: Pod "busybox-readonly-false-b7db1fa6-0c9d-4fd3-a028-5805d59c5e08" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:51:11.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7706" for this suite. • [SLOW TEST:6.567 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":294,"completed":238,"skipped":3843,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:51:11.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-cba9d35e-b504-43fc-8155-49890ed31b2c in namespace container-probe-8484 Aug 17 00:51:16.901: INFO: Started pod busybox-cba9d35e-b504-43fc-8155-49890ed31b2c in namespace container-probe-8484 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 00:51:16.918: INFO: Initial restart count of pod busybox-cba9d35e-b504-43fc-8155-49890ed31b2c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:55:18.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-probe-8484" for this suite. • [SLOW TEST:247.062 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":239,"skipped":3845,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:55:18.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-27480f3e-40c0-433a-b640-9de85e524692 STEP: Creating a pod to test consume secrets Aug 17 00:55:18.932: INFO: Waiting up to 5m0s for pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26" in namespace "secrets-1036" to be "Succeeded or Failed" Aug 17 00:55:18.945: INFO: Pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 12.292998ms Aug 17 00:55:21.567: INFO: Pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634137944s Aug 17 00:55:23.570: INFO: Pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637611519s Aug 17 00:55:25.574: INFO: Pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.641291181s STEP: Saw pod success Aug 17 00:55:25.574: INFO: Pod "pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26" satisfied condition "Succeeded or Failed" Aug 17 00:55:25.577: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26 container secret-volume-test: STEP: delete the pod Aug 17 00:55:25.646: INFO: Waiting for pod pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26 to disappear Aug 17 00:55:25.651: INFO: Pod pod-secrets-a4af060a-f00c-4b98-8202-1f23e1da9d26 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:55:25.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1036" for this suite. 
• [SLOW TEST:6.874 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":240,"skipped":3849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:55:25.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8443 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 00:55:25.705: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 00:55:25.807: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:55:27.812: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:55:29.811: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 00:55:31.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:33.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:35.975: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:37.812: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:39.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:41.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:43.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:45.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 00:55:47.812: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 00:55:47.817: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 00:55:49.821: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 00:55:53.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.89:8080/dial?request=hostname&protocol=http&host=10.244.2.107&port=8080&tries=1'] Namespace:pod-network-test-8443 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Aug 17 00:55:53.847: INFO: >>> kubeConfig: /root/.kube/config I0817 00:55:53.888192 7 log.go:181] (0xc002952580) (0xc003387b80) Create stream I0817 00:55:53.888229 7 log.go:181] (0xc002952580) (0xc003387b80) Stream added, broadcasting: 1 I0817 00:55:53.891153 7 log.go:181] (0xc002952580) Reply frame received for 1 I0817 00:55:53.891206 7 log.go:181] (0xc002952580) (0xc0016d8f00) Create stream I0817 00:55:53.891311 7 log.go:181] (0xc002952580) (0xc0016d8f00) Stream added, broadcasting: 3 I0817 00:55:53.892570 7 log.go:181] (0xc002952580) Reply frame received for 3 I0817 00:55:53.892625 7 log.go:181] (0xc002952580) (0xc0006fdb80) Create stream I0817 00:55:53.892642 7 log.go:181] (0xc002952580) (0xc0006fdb80) Stream added, broadcasting: 5 I0817 00:55:53.893931 7 log.go:181] (0xc002952580) Reply frame received for 5 I0817 00:55:53.990035 7 log.go:181] (0xc002952580) Data frame received for 3 I0817 00:55:53.990077 7 log.go:181] (0xc0016d8f00) (3) Data frame handling I0817 00:55:53.990105 7 log.go:181] (0xc0016d8f00) (3) Data frame sent I0817 00:55:53.990426 7 log.go:181] (0xc002952580) Data frame received for 3 I0817 00:55:53.990447 7 log.go:181] (0xc0016d8f00) (3) Data frame handling I0817 00:55:53.990579 7 log.go:181] (0xc002952580) Data frame received for 5 I0817 00:55:53.990597 7 log.go:181] (0xc0006fdb80) (5) Data frame handling I0817 00:55:53.992471 7 log.go:181] (0xc002952580) Data frame received for 1 I0817 00:55:53.992496 7 log.go:181] (0xc003387b80) (1) Data frame handling I0817 00:55:53.992509 7 log.go:181] (0xc003387b80) (1) Data frame sent I0817 00:55:53.992519 7 log.go:181] (0xc002952580) (0xc003387b80) Stream removed, broadcasting: 1 I0817 00:55:53.992529 7 log.go:181] (0xc002952580) Go away received I0817 00:55:53.992696 7 log.go:181] (0xc002952580) (0xc003387b80) Stream removed, broadcasting: 1 I0817 00:55:53.992817 7 log.go:181] (0xc002952580) (0xc0016d8f00) Stream removed, broadcasting: 3 I0817 00:55:53.992835 7 log.go:181] (0xc002952580) (0xc0006fdb80) Stream removed, broadcasting: 5 Aug 17 00:55:53.992: INFO: Waiting for responses: map[] Aug 17 00:55:53.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.89:8080/dial?request=hostname&protocol=http&host=10.244.1.88&port=8080&tries=1'] Namespace:pod-network-test-8443 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 00:55:53.995: INFO: >>> kubeConfig: /root/.kube/config I0817 00:55:54.027563 7 log.go:181] (0xc003383970) (0xc0016d9cc0) Create stream I0817 00:55:54.027589 7 log.go:181] (0xc003383970) (0xc0016d9cc0) Stream added, broadcasting: 1 I0817 00:55:54.030169 7 log.go:181] (0xc003383970) Reply frame received for 1 I0817 00:55:54.030206 7 log.go:181] (0xc003383970) (0xc0016d9d60) Create stream I0817 00:55:54.030214 7 log.go:181] (0xc003383970) (0xc0016d9d60) Stream added, broadcasting: 3 I0817 00:55:54.031352 7 log.go:181] (0xc003383970) Reply frame received for 3 I0817 00:55:54.031412 7 log.go:181] (0xc003383970) (0xc00257e1e0) Create stream I0817 00:55:54.031428 7 log.go:181] (0xc003383970) (0xc00257e1e0) Stream added, broadcasting: 5 I0817 00:55:54.032578 7 log.go:181] (0xc003383970) Reply frame received for 5 I0817 00:55:54.110244 7 log.go:181] (0xc003383970) Data frame received for 3 I0817 00:55:54.110282 7 log.go:181] (0xc0016d9d60) (3) Data frame handling I0817 00:55:54.110308 7 log.go:181] (0xc0016d9d60) (3) Data frame sent I0817 00:55:54.110841 7 log.go:181] (0xc003383970) Data frame 
received for 3 I0817 00:55:54.110933 7 log.go:181] (0xc0016d9d60) (3) Data frame handling I0817 00:55:54.110997 7 log.go:181] (0xc003383970) Data frame received for 5 I0817 00:55:54.111035 7 log.go:181] (0xc00257e1e0) (5) Data frame handling I0817 00:55:54.112383 7 log.go:181] (0xc003383970) Data frame received for 1 I0817 00:55:54.112406 7 log.go:181] (0xc0016d9cc0) (1) Data frame handling I0817 00:55:54.112424 7 log.go:181] (0xc0016d9cc0) (1) Data frame sent I0817 00:55:54.112438 7 log.go:181] (0xc003383970) (0xc0016d9cc0) Stream removed, broadcasting: 1 I0817 00:55:54.112453 7 log.go:181] (0xc003383970) Go away received I0817 00:55:54.112568 7 log.go:181] (0xc003383970) (0xc0016d9cc0) Stream removed, broadcasting: 1 I0817 00:55:54.112583 7 log.go:181] (0xc003383970) (0xc0016d9d60) Stream removed, broadcasting: 3 I0817 00:55:54.112589 7 log.go:181] (0xc003383970) (0xc00257e1e0) Stream removed, broadcasting: 5 Aug 17 00:55:54.112: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:55:54.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8443" for this suite. • [SLOW TEST:28.460 seconds] [sig-network] Networking /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":294,"completed":241,"skipped":3875,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:55:54.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components
Aug 17 00:55:54.195: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Aug 17 00:55:54.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:54.590: INFO: stderr: "" Aug 17 00:55:54.590: INFO: stdout: "service/agnhost-replica created\n"
Aug 17 00:55:54.591: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Aug 17 00:55:54.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:54.982: INFO: stderr: "" Aug 17 00:55:54.982: INFO: stdout: "service/agnhost-primary created\n"
Aug 17 00:55:54.982: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Aug 17 00:55:54.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:55.295: INFO: stderr: "" Aug 17 00:55:55.295: INFO: stdout: "service/frontend created\n"
Aug 17 00:55:55.295: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Aug 17 00:55:55.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:55.588: INFO: stderr: "" Aug 17 00:55:55.588: INFO: stdout: "deployment.apps/frontend created\n"
Aug 17 00:55:55.588: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 17 00:55:55.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:55.967: INFO: stderr: "" Aug 17 00:55:55.967: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Aug 17 00:55:55.967: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 17 00:55:55.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9553' Aug 17 00:55:56.290: INFO: stderr: "" Aug 17
00:55:56.290: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Aug 17 00:55:56.290: INFO: Waiting for all frontend pods to be Running. Aug 17 00:56:11.341: INFO: Waiting for frontend to serve content. Aug 17 00:56:11.490: INFO: Trying to add a new entry to the guestbook. Aug 17 00:56:11.498: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 17 00:56:11.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:11.795: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:11.795: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Aug 17 00:56:11.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:12.312: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:12.312: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 17 00:56:12.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:12.669: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:12.669: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 17 00:56:12.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:12.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:12.805: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 17 00:56:12.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:14.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:14.824: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 17 00:56:14.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9553' Aug 17 00:56:16.345: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 17 00:56:16.345: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:56:16.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9553" for this suite. • [SLOW TEST:23.402 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:350 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":294,"completed":242,"skipped":3880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:56:17.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 00:58:20.479: INFO: Deleting pod "var-expansion-a89023e8-9937-4278-a49b-22221f456a03" in namespace "var-expansion-8899" Aug 17 00:58:20.483: INFO: Wait up to 5m0s for pod "var-expansion-a89023e8-9937-4278-a49b-22221f456a03" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:58:26.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8899" for this suite. 
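(Editor's note: a rough sketch of the pod shape behind the backticks test above. The subPathExpr field may only expand $(VAR) references; when the referenced value carries a backtick, the expanded subpath fails validation and the pod never starts, which is why the test only waits for the pod and then deletes it. Field values here are illustrative; the exact env value is in the test source, not this log.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: the volume subpath is built from an env var whose
	// value contains backticks, so the substitution is rejected.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "true"},
				Env:     []corev1.EnvVar{{Name: "POD_NAME", Value: "`name`"}}, // backticks: invalid in a subpath
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/data",
					SubPathExpr: "$(POD_NAME)", // expands to a value containing backticks
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println(pod.Name)
}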
• [SLOW TEST:128.979 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":294,"completed":243,"skipped":3991,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:58:26.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-391/configmap-test-0110f97e-7306-4d53-81ae-4bb1991dfc40 STEP: Creating a pod to test consume configMaps Aug 17 00:58:26.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a" in namespace "configmap-391" to be "Succeeded or Failed" Aug 17 00:58:26.736: INFO: Pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.860313ms Aug 17 00:58:29.031: INFO: Pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298371844s Aug 17 00:58:31.035: INFO: Pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303088772s Aug 17 00:58:33.073: INFO: Pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.340423166s STEP: Saw pod success Aug 17 00:58:33.073: INFO: Pod "pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a" satisfied condition "Succeeded or Failed" Aug 17 00:58:33.075: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a container env-test: STEP: delete the pod Aug 17 00:58:33.611: INFO: Waiting for pod pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a to disappear Aug 17 00:58:33.743: INFO: Pod pod-configmaps-2bfc37af-eb12-4644-af1b-8ddda401b59a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:58:33.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-391" for this suite. 
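(Editor's note: for reference, a minimal Go sketch of the pod shape the ConfigMap environment test above creates: one container whose env var is sourced from a ConfigMap key, printing its environment so the test can check the value in the container logs. The pod name, ConfigMap name, and key are illustrative.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: CONFIG_DATA_1 is filled from key "data-1" of a
	// ConfigMap; the container dumps its environment for the test to inspect.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // illustrative
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}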
• [SLOW TEST:7.270 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":244,"skipped":3995,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:58:33.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-dst89 in namespace proxy-8560 I0817 00:58:33.889785 7 runners.go:190] Created replication controller with name: proxy-service-dst89, namespace: proxy-8560, replica count: 1 I0817 00:58:34.940188 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:58:35.940392 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:58:36.940594 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 00:58:37.940869 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 00:58:38.941032 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 00:58:39.941237 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 00:58:40.941436 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 00:58:41.941602 7 runners.go:190] proxy-service-dst89 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 00:58:41.943: INFO: setup took 8.123067263s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 5.276789ms) Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 5.410094ms) Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 5.516608ms) Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 5.955588ms) Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 5.926799ms) Aug 17 00:58:41.949: INFO: (0) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 5.954786ms) Aug 17 00:58:41.951: INFO: (0) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 8.192583ms) Aug 17 00:58:41.953: INFO: (0) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 9.602213ms) Aug 17 00:58:41.953: INFO: (0) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 9.661823ms) Aug 17 00:58:41.954: INFO: (0) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 10.527037ms) Aug 17 00:58:41.955: INFO: (0) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 11.718688ms) Aug 17 00:58:41.955: INFO: (0) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 12.051691ms) Aug 17 00:58:41.955: INFO: (0) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 12.149159ms) Aug 17 00:58:41.959: INFO: (0) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 15.40048ms) Aug 17 00:58:41.959: INFO: (0) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 15.463009ms) Aug 17 00:58:41.959: INFO: (0) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 3.038164ms) Aug 17 00:58:41.962: INFO: (1) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 3.145876ms) Aug 17 00:58:41.962: INFO: (1) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 3.07284ms) Aug 17 00:58:41.974: INFO: (1) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 15.042349ms) Aug 17 00:58:41.975: INFO: (1) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 15.285351ms) Aug 17 00:58:41.975: INFO: (1) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 16.061333ms) Aug 17 00:58:41.976: INFO: (1) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 16.229131ms) Aug 17 00:58:41.976: INFO: (1) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 16.159002ms) Aug 17 00:58:41.976: INFO: (1) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 16.210168ms) Aug 17 00:58:41.976: INFO: (1) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 16.272747ms) Aug 17 00:58:41.976: INFO: (1) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 16.264581ms) Aug 17 00:58:41.978: INFO: (2) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 1.861184ms) Aug 17 00:58:41.978: INFO: (2) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 2.519032ms) Aug 17 00:58:41.980: INFO: (2) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 4.114785ms) Aug 17 00:58:41.981: INFO: (2) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 4.929219ms) Aug 17 00:58:41.981: INFO: (2) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 4.930101ms) Aug 17 00:58:41.981: INFO: (2) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: ... (200; 3.922571ms) Aug 17 00:58:41.986: INFO: (3) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 4.130284ms) Aug 17 00:58:41.986: INFO: (3) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 4.196962ms) Aug 17 00:58:41.986: INFO: (3) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 4.082814ms) Aug 17 00:58:41.986: INFO: (3) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 5.138746ms) Aug 17 00:58:41.992: INFO: (4) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.191783ms) Aug 17 00:58:41.992: INFO: (4) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 5.204096ms) Aug 17 00:58:41.992: INFO: (4) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 5.217509ms) Aug 17 00:58:41.992: INFO: (4) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 5.268497ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 20.597821ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 20.59293ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 20.583942ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 20.69909ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 20.719505ms) Aug 17 00:58:42.013: INFO: (5) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 20.824531ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 21.473045ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 21.473142ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 21.498194ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 21.526095ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 21.536001ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 21.488245ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 21.654097ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 21.64321ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 21.684851ms) Aug 17 00:58:42.014: INFO: (5) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 2.070582ms) Aug 17 00:58:42.017: INFO: (6) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 2.989111ms) Aug 17 00:58:42.018: INFO: (6) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 4.010547ms) Aug 17 00:58:42.019: INFO: (6) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 4.48179ms) Aug 17 00:58:42.019: INFO: (6) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 4.77838ms) Aug 17 00:58:42.019: INFO: (6) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 4.998172ms) Aug 17 00:58:42.019: INFO: (6) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 4.95652ms) Aug 17 00:58:42.019: INFO: (6) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 5.783006ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: ... (200; 7.101413ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 7.093933ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 7.135421ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 7.113803ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 7.180307ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 7.220139ms) Aug 17 00:58:42.027: INFO: (7) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 7.293094ms) Aug 17 00:58:42.028: INFO: (7) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 7.716801ms) Aug 17 00:58:42.028: INFO: (7) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 7.902293ms) Aug 17 00:58:42.028: INFO: (7) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 8.155346ms) Aug 17 00:58:42.029: INFO: (7) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 9.075398ms) Aug 17 00:58:42.029: INFO: (7) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 9.170349ms) Aug 17 00:58:42.032: INFO: (8) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: ... (200; 4.846381ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 4.992421ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 4.976019ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 4.970214ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 5.026293ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.075483ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 5.052552ms) Aug 17 00:58:42.034: INFO: (8) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 5.077735ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 3.232613ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.351818ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.201147ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 3.24845ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 3.286807ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 3.244157ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 3.358182ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 3.291885ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 3.529981ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 3.856858ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 4.02178ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 4.062404ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 4.0109ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 4.004065ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 4.064308ms) Aug 17 00:58:42.038: INFO: (9) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 6.27788ms) Aug 17 00:58:42.045: INFO: (10) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 6.500068ms) Aug 17 00:58:42.045: INFO: (10) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 6.74848ms) Aug 17 00:58:42.045: INFO: (10) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 6.719786ms) Aug 17 00:58:42.045: INFO: (10) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... 
(200; 6.669757ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 7.040517ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 7.024375ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 7.108492ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 7.138586ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 7.141535ms) Aug 17 00:58:42.046: INFO: (10) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 7.10928ms) Aug 17 00:58:42.048: INFO: (11) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 2.245296ms) Aug 17 00:58:42.048: INFO: (11) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 2.37307ms) Aug 17 00:58:42.049: INFO: (11) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 2.869853ms) Aug 17 00:58:42.049: INFO: (11) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 3.03664ms) Aug 17 00:58:42.049: INFO: (11) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.179754ms) Aug 17 00:58:42.049: INFO: (11) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 3.319066ms) Aug 17 00:58:42.049: INFO: (11) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: ... (200; 3.410096ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 3.848667ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 4.043102ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 4.096006ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 4.21379ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 4.392481ms) Aug 17 00:58:42.050: INFO: (11) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 4.597069ms) Aug 17 00:58:42.054: INFO: (12) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... 
(200; 3.665046ms) Aug 17 00:58:42.054: INFO: (12) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 3.718673ms) Aug 17 00:58:42.054: INFO: (12) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 4.955503ms) Aug 17 00:58:42.055: INFO: (12) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 5.0297ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.046988ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 5.106096ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 5.238882ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 5.173862ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 5.169666ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 5.178143ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 5.172893ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 5.226916ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 5.373175ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.394543ms) Aug 17 00:58:42.056: INFO: (12) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 5.712247ms) Aug 17 00:58:42.059: INFO: (13) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 2.364764ms) Aug 17 00:58:42.059: INFO: (13) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 2.617388ms) Aug 17 00:58:42.060: INFO: (13) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 3.964308ms) Aug 17 00:58:42.061: INFO: (13) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 4.192603ms) Aug 17 00:58:42.061: INFO: (13) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... 
(200; 4.511633ms) Aug 17 00:58:42.061: INFO: (13) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 4.628291ms) Aug 17 00:58:42.061: INFO: (13) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 4.664003ms) Aug 17 00:58:42.061: INFO: (13) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 4.647692ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 5.238526ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.101435ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 5.197101ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 5.228903ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 5.21801ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 5.317433ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 5.330425ms) Aug 17 00:58:42.062: INFO: (13) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 3.198859ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 3.402106ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.97999ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 4.204113ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 4.30073ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 4.170257ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 4.17784ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... 
(200; 4.332435ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 4.560156ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 4.561245ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 4.478187ms) Aug 17 00:58:42.066: INFO: (14) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 4.741477ms) Aug 17 00:58:42.067: INFO: (14) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 4.652701ms) Aug 17 00:58:42.067: INFO: (14) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 4.551442ms) Aug 17 00:58:42.067: INFO: (14) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 4.772716ms) Aug 17 00:58:42.067: INFO: (14) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 3.689993ms) Aug 17 00:58:42.071: INFO: (15) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 3.922582ms) Aug 17 00:58:42.071: INFO: (15) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 4.377755ms) Aug 17 00:58:42.072: INFO: (15) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 4.961693ms) Aug 17 00:58:42.072: INFO: (15) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 5.776062ms) Aug 17 00:58:42.073: INFO: (15) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 5.94243ms) Aug 17 00:58:42.074: INFO: (16) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 1.59911ms) Aug 17 00:58:42.075: INFO: (16) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 1.650031ms) Aug 17 00:58:42.075: INFO: (16) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 2.005759ms) Aug 17 00:58:42.075: INFO: (16) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 2.158954ms) Aug 17 00:58:42.075: INFO: (16) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 1.854399ms) Aug 17 00:58:42.076: INFO: (16) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 3.16503ms) Aug 17 00:58:42.076: INFO: (16) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 2.939341ms) Aug 17 00:58:42.077: INFO: (16) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 3.841834ms) Aug 17 00:58:42.077: INFO: (16) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 3.362804ms) Aug 17 00:58:42.077: INFO: (16) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... (200; 4.493631ms) Aug 17 00:58:42.080: INFO: (17) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 2.111857ms) Aug 17 00:58:42.080: INFO: (17) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 2.392535ms) Aug 17 00:58:42.080: INFO: (17) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... 
(200; 2.445024ms) Aug 17 00:58:42.083: INFO: (17) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 5.108463ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 5.490951ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 5.823802ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 5.82994ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 5.867447ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 5.93197ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 5.953474ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 6.203152ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 6.274329ms) Aug 17 00:58:42.084: INFO: (17) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 6.273085ms) Aug 17 00:58:42.087: INFO: (18) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 2.987572ms) Aug 17 00:58:42.087: INFO: (18) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:160/proxy/: foo (200; 3.005883ms) Aug 17 00:58:42.087: INFO: (18) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 3.008186ms) Aug 17 00:58:42.087: INFO: (18) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.160075ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:1080/proxy/: test<... (200; 3.05961ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:460/proxy/: tls baz (200; 3.06993ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 3.146883ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:462/proxy/: tls qux (200; 3.171326ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb/proxy/: test (200; 3.23643ms) Aug 17 00:58:42.088: INFO: (18) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test (200; 8.546729ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:160/proxy/: foo (200; 8.520768ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:1080/proxy/: ... (200; 8.687616ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/pods/https:proxy-service-dst89-nk6mb:443/proxy/: test<... 
(200; 8.795769ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/pods/http:proxy-service-dst89-nk6mb:162/proxy/: bar (200; 8.973817ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/pods/proxy-service-dst89-nk6mb:162/proxy/: bar (200; 9.127905ms) Aug 17 00:58:42.098: INFO: (19) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname2/proxy/: bar (200; 9.096363ms) Aug 17 00:58:42.100: INFO: (19) /api/v1/namespaces/proxy-8560/services/http:proxy-service-dst89:portname1/proxy/: foo (200; 10.919192ms) Aug 17 00:58:42.100: INFO: (19) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname2/proxy/: bar (200; 11.13405ms) Aug 17 00:58:42.101: INFO: (19) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname1/proxy/: tls baz (200; 11.564823ms) Aug 17 00:58:42.101: INFO: (19) /api/v1/namespaces/proxy-8560/services/https:proxy-service-dst89:tlsportname2/proxy/: tls qux (200; 11.523145ms) Aug 17 00:58:42.101: INFO: (19) /api/v1/namespaces/proxy-8560/services/proxy-service-dst89:portname1/proxy/: foo (200; 11.525791ms) STEP: deleting ReplicationController proxy-service-dst89 in namespace proxy-8560, will wait for the garbage collector to delete the pods Aug 17 00:58:42.156: INFO: Deleting ReplicationController proxy-service-dst89 took: 3.996687ms Aug 17 00:58:42.557: INFO: Terminating ReplicationController proxy-service-dst89 pods took: 400.196471ms [AfterEach] version v1 /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:58:50.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8560" for this suite. • [SLOW TEST:16.289 seconds] [sig-network] Proxy /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":294,"completed":245,"skipped":3996,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:58:50.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-70bcd878-949c-4aef-837f-cce4dab2b768 STEP: Creating a pod to test consume configMaps Aug 17 00:58:50.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467" in namespace "configmap-7530" to be "Succeeded or Failed" Aug 17 00:58:50.166: INFO: Pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467": Phase="Pending", Reason="", readiness=false. Elapsed: 20.841534ms Aug 17 00:58:52.169: INFO: Pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023312906s Aug 17 00:58:54.253: INFO: Pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467": Phase="Running", Reason="", readiness=true. Elapsed: 4.107632386s Aug 17 00:58:56.257: INFO: Pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112014057s STEP: Saw pod success Aug 17 00:58:56.258: INFO: Pod "pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467" satisfied condition "Succeeded or Failed" Aug 17 00:58:56.270: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467 container configmap-volume-test: STEP: delete the pod Aug 17 00:58:56.567: INFO: Waiting for pod pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467 to disappear Aug 17 00:58:56.654: INFO: Pod pod-configmaps-9bd14940-263f-425a-a103-61c90d22d467 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:58:56.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7530" for this suite. • [SLOW TEST:6.680 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":246,"skipped":4003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:58:56.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: 
watching Aug 17 00:58:57.690: INFO: starting watch STEP: patching STEP: updating Aug 17 00:58:57.715: INFO: waiting for watch events with expected annotations Aug 17 00:58:57.715: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 00:58:58.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-3051" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":294,"completed":247,"skipped":4040,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 00:58:58.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2591 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 17 00:58:58.228: INFO: Found 0 stateful pods, waiting for 3 Aug 17 00:59:08.232: INFO: Found 2 stateful pods, waiting for 3 Aug 17 00:59:18.231: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:59:18.231: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:59:18.231: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Aug 17 00:59:28.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:59:28.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:59:28.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 17 00:59:28.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2591 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 00:59:28.484: INFO: stderr: "I0817 
00:59:28.356128 2890 log.go:181] (0xc000b96b00) (0xc000823360) Create stream\nI0817 00:59:28.356168 2890 log.go:181] (0xc000b96b00) (0xc000823360) Stream added, broadcasting: 1\nI0817 00:59:28.358429 2890 log.go:181] (0xc000b96b00) Reply frame received for 1\nI0817 00:59:28.358487 2890 log.go:181] (0xc000b96b00) (0xc000764140) Create stream\nI0817 00:59:28.358500 2890 log.go:181] (0xc000b96b00) (0xc000764140) Stream added, broadcasting: 3\nI0817 00:59:28.359725 2890 log.go:181] (0xc000b96b00) Reply frame received for 3\nI0817 00:59:28.359784 2890 log.go:181] (0xc000b96b00) (0xc00075c140) Create stream\nI0817 00:59:28.359801 2890 log.go:181] (0xc000b96b00) (0xc00075c140) Stream added, broadcasting: 5\nI0817 00:59:28.360562 2890 log.go:181] (0xc000b96b00) Reply frame received for 5\nI0817 00:59:28.415955 2890 log.go:181] (0xc000b96b00) Data frame received for 5\nI0817 00:59:28.415973 2890 log.go:181] (0xc00075c140) (5) Data frame handling\nI0817 00:59:28.415988 2890 log.go:181] (0xc00075c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 00:59:28.477535 2890 log.go:181] (0xc000b96b00) Data frame received for 3\nI0817 00:59:28.477554 2890 log.go:181] (0xc000764140) (3) Data frame handling\nI0817 00:59:28.477561 2890 log.go:181] (0xc000764140) (3) Data frame sent\nI0817 00:59:28.477878 2890 log.go:181] (0xc000b96b00) Data frame received for 5\nI0817 00:59:28.477888 2890 log.go:181] (0xc00075c140) (5) Data frame handling\nI0817 00:59:28.477928 2890 log.go:181] (0xc000b96b00) Data frame received for 3\nI0817 00:59:28.477967 2890 log.go:181] (0xc000764140) (3) Data frame handling\nI0817 00:59:28.479025 2890 log.go:181] (0xc000b96b00) Data frame received for 1\nI0817 00:59:28.479063 2890 log.go:181] (0xc000823360) (1) Data frame handling\nI0817 00:59:28.479079 2890 log.go:181] (0xc000823360) (1) Data frame sent\nI0817 00:59:28.479099 2890 log.go:181] (0xc000b96b00) (0xc000823360) Stream removed, broadcasting: 1\nI0817 00:59:28.479118 2890 log.go:181] (0xc000b96b00) Go away received\nI0817 00:59:28.479313 2890 log.go:181] (0xc000b96b00) (0xc000823360) Stream removed, broadcasting: 1\nI0817 00:59:28.479325 2890 log.go:181] (0xc000b96b00) (0xc000764140) Stream removed, broadcasting: 3\nI0817 00:59:28.479333 2890 log.go:181] (0xc000b96b00) (0xc00075c140) Stream removed, broadcasting: 5\n" Aug 17 00:59:28.484: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 00:59:28.484: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 17 00:59:39.183: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 17 00:59:50.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2591 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 00:59:50.302: INFO: stderr: "I0817 00:59:50.199126 2903 log.go:181] (0xc000ea2420) (0xc0008755e0) Create stream\nI0817 00:59:50.199177 2903 log.go:181] (0xc000ea2420) (0xc0008755e0) Stream added, broadcasting: 1\nI0817 00:59:50.200596 2903 log.go:181] (0xc000ea2420) Reply frame received for 1\nI0817 00:59:50.200612 2903 log.go:181] (0xc000ea2420) (0xc000875f40) Create stream\nI0817 00:59:50.200618 2903 log.go:181] 
(0xc000ea2420) (0xc000875f40) Stream added, broadcasting: 3\nI0817 00:59:50.201311 2903 log.go:181] (0xc000ea2420) Reply frame received for 3\nI0817 00:59:50.201325 2903 log.go:181] (0xc000ea2420) (0xc0002acdc0) Create stream\nI0817 00:59:50.201333 2903 log.go:181] (0xc000ea2420) (0xc0002acdc0) Stream added, broadcasting: 5\nI0817 00:59:50.201868 2903 log.go:181] (0xc000ea2420) Reply frame received for 5\nI0817 00:59:50.294495 2903 log.go:181] (0xc000ea2420) Data frame received for 5\nI0817 00:59:50.294532 2903 log.go:181] (0xc0002acdc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 00:59:50.294544 2903 log.go:181] (0xc000ea2420) Data frame received for 3\nI0817 00:59:50.294569 2903 log.go:181] (0xc000875f40) (3) Data frame handling\nI0817 00:59:50.294587 2903 log.go:181] (0xc000875f40) (3) Data frame sent\nI0817 00:59:50.294665 2903 log.go:181] (0xc0002acdc0) (5) Data frame sent\nI0817 00:59:50.294680 2903 log.go:181] (0xc000ea2420) Data frame received for 5\nI0817 00:59:50.294686 2903 log.go:181] (0xc0002acdc0) (5) Data frame handling\nI0817 00:59:50.294786 2903 log.go:181] (0xc000ea2420) Data frame received for 3\nI0817 00:59:50.294800 2903 log.go:181] (0xc000875f40) (3) Data frame handling\nI0817 00:59:50.297143 2903 log.go:181] (0xc000ea2420) Data frame received for 1\nI0817 00:59:50.297169 2903 log.go:181] (0xc0008755e0) (1) Data frame handling\nI0817 00:59:50.297192 2903 log.go:181] (0xc0008755e0) (1) Data frame sent\nI0817 00:59:50.297251 2903 log.go:181] (0xc000ea2420) (0xc0008755e0) Stream removed, broadcasting: 1\nI0817 00:59:50.297275 2903 log.go:181] (0xc000ea2420) Go away received\nI0817 00:59:50.297571 2903 log.go:181] (0xc000ea2420) (0xc0008755e0) Stream removed, broadcasting: 1\nI0817 00:59:50.297597 2903 log.go:181] (0xc000ea2420) (0xc000875f40) Stream removed, broadcasting: 3\nI0817 00:59:50.297607 2903 log.go:181] (0xc000ea2420) (0xc0002acdc0) Stream removed, broadcasting: 5\n" Aug 17 00:59:50.302: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 00:59:50.302: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 01:00:00.319: INFO: Waiting for StatefulSet statefulset-2591/ss2 to complete update Aug 17 01:00:00.319: INFO: Waiting for Pod statefulset-2591/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 01:00:00.319: INFO: Waiting for Pod statefulset-2591/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 01:00:10.373: INFO: Waiting for StatefulSet statefulset-2591/ss2 to complete update Aug 17 01:00:10.373: INFO: Waiting for Pod statefulset-2591/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 01:00:20.327: INFO: Waiting for StatefulSet statefulset-2591/ss2 to complete update STEP: Rolling back to a previous revision Aug 17 01:00:30.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2591 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 01:00:30.824: INFO: stderr: "I0817 01:00:30.447095 2920 log.go:181] (0xc00003b6b0) (0xc000c819a0) Create stream\nI0817 01:00:30.447129 2920 log.go:181] (0xc00003b6b0) (0xc000c819a0) Stream added, broadcasting: 1\nI0817 01:00:30.449930 2920 log.go:181] (0xc00003b6b0) Reply frame received for 1\nI0817 01:00:30.449959 2920 log.go:181] (0xc00003b6b0) (0xc000b04780) Create 
stream\nI0817 01:00:30.449968 2920 log.go:181] (0xc00003b6b0) (0xc000b04780) Stream added, broadcasting: 3\nI0817 01:00:30.450741 2920 log.go:181] (0xc00003b6b0) Reply frame received for 3\nI0817 01:00:30.450775 2920 log.go:181] (0xc00003b6b0) (0xc000c6d220) Create stream\nI0817 01:00:30.450792 2920 log.go:181] (0xc00003b6b0) (0xc000c6d220) Stream added, broadcasting: 5\nI0817 01:00:30.451455 2920 log.go:181] (0xc00003b6b0) Reply frame received for 5\nI0817 01:00:30.512961 2920 log.go:181] (0xc00003b6b0) Data frame received for 5\nI0817 01:00:30.512988 2920 log.go:181] (0xc000c6d220) (5) Data frame handling\nI0817 01:00:30.513006 2920 log.go:181] (0xc000c6d220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 01:00:30.814443 2920 log.go:181] (0xc00003b6b0) Data frame received for 3\nI0817 01:00:30.814469 2920 log.go:181] (0xc000b04780) (3) Data frame handling\nI0817 01:00:30.814482 2920 log.go:181] (0xc000b04780) (3) Data frame sent\nI0817 01:00:30.814494 2920 log.go:181] (0xc00003b6b0) Data frame received for 3\nI0817 01:00:30.814504 2920 log.go:181] (0xc000b04780) (3) Data frame handling\nI0817 01:00:30.814702 2920 log.go:181] (0xc00003b6b0) Data frame received for 5\nI0817 01:00:30.814719 2920 log.go:181] (0xc000c6d220) (5) Data frame handling\nI0817 01:00:30.815764 2920 log.go:181] (0xc00003b6b0) Data frame received for 1\nI0817 01:00:30.815778 2920 log.go:181] (0xc000c819a0) (1) Data frame handling\nI0817 01:00:30.815790 2920 log.go:181] (0xc000c819a0) (1) Data frame sent\nI0817 01:00:30.816011 2920 log.go:181] (0xc00003b6b0) (0xc000c819a0) Stream removed, broadcasting: 1\nI0817 01:00:30.816032 2920 log.go:181] (0xc00003b6b0) Go away received\nI0817 01:00:30.816226 2920 log.go:181] (0xc00003b6b0) (0xc000c819a0) Stream removed, broadcasting: 1\nI0817 01:00:30.816237 2920 log.go:181] (0xc00003b6b0) (0xc000b04780) Stream removed, broadcasting: 3\nI0817 01:00:30.816242 2920 log.go:181] (0xc00003b6b0) (0xc000c6d220) Stream removed, broadcasting: 5\n" Aug 17 01:00:30.824: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 01:00:30.824: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 01:00:40.857: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 17 01:00:50.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2591 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 01:00:51.150: INFO: stderr: "I0817 01:00:51.070912 2938 log.go:181] (0xc00003a0b0) (0xc00037fcc0) Create stream\nI0817 01:00:51.070980 2938 log.go:181] (0xc00003a0b0) (0xc00037fcc0) Stream added, broadcasting: 1\nI0817 01:00:51.072628 2938 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0817 01:00:51.072663 2938 log.go:181] (0xc00003a0b0) (0xc0001ffa40) Create stream\nI0817 01:00:51.072673 2938 log.go:181] (0xc00003a0b0) (0xc0001ffa40) Stream added, broadcasting: 3\nI0817 01:00:51.073543 2938 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0817 01:00:51.073587 2938 log.go:181] (0xc00003a0b0) (0xc00019c3c0) Create stream\nI0817 01:00:51.073607 2938 log.go:181] (0xc00003a0b0) (0xc00019c3c0) Stream added, broadcasting: 5\nI0817 01:00:51.074342 2938 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0817 01:00:51.131560 2938 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0817 
01:00:51.131585 2938 log.go:181] (0xc00019c3c0) (5) Data frame handling\nI0817 01:00:51.131597 2938 log.go:181] (0xc00019c3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 01:00:51.140117 2938 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0817 01:00:51.140166 2938 log.go:181] (0xc0001ffa40) (3) Data frame handling\nI0817 01:00:51.140188 2938 log.go:181] (0xc0001ffa40) (3) Data frame sent\nI0817 01:00:51.140217 2938 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0817 01:00:51.140225 2938 log.go:181] (0xc00019c3c0) (5) Data frame handling\nI0817 01:00:51.140424 2938 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0817 01:00:51.140437 2938 log.go:181] (0xc0001ffa40) (3) Data frame handling\nI0817 01:00:51.141697 2938 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0817 01:00:51.141714 2938 log.go:181] (0xc00037fcc0) (1) Data frame handling\nI0817 01:00:51.141723 2938 log.go:181] (0xc00037fcc0) (1) Data frame sent\nI0817 01:00:51.141833 2938 log.go:181] (0xc00003a0b0) (0xc00037fcc0) Stream removed, broadcasting: 1\nI0817 01:00:51.141882 2938 log.go:181] (0xc00003a0b0) Go away received\nI0817 01:00:51.142220 2938 log.go:181] (0xc00003a0b0) (0xc00037fcc0) Stream removed, broadcasting: 1\nI0817 01:00:51.142233 2938 log.go:181] (0xc00003a0b0) (0xc0001ffa40) Stream removed, broadcasting: 3\nI0817 01:00:51.142240 2938 log.go:181] (0xc00003a0b0) (0xc00019c3c0) Stream removed, broadcasting: 5\n" Aug 17 01:00:51.150: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 01:00:51.150: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 01:01:21.169: INFO: Waiting for StatefulSet statefulset-2591/ss2 to complete update Aug 17 01:01:21.169: INFO: Waiting for Pod statefulset-2591/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 01:01:31.176: INFO: Waiting for StatefulSet statefulset-2591/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 01:01:41.176: INFO: Deleting all statefulset in ns statefulset-2591 Aug 17 01:01:41.178: INFO: Scaling statefulset ss2 to 0 Aug 17 01:02:11.441: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 01:02:11.444: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:02:11.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2591" for this suite. 
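(Editor's note: the rolling update and rollback exercised above are driven by the e2e framework through the API, not via kubectl; the commands below are an illustrative sketch of the equivalent manual steps, using the namespace, StatefulSet name, image tags, and ControllerRevision names recorded in the log. The '*=' wildcard updates every container in the pod template, so no container name needs to be assumed.)

    # Update the pod template image, as in the "Updating StatefulSet template" step above
    kubectl --namespace statefulset-2591 set image statefulset/ss2 '*=docker.io/library/httpd:2.4.39-alpine'

    # Watch the RollingUpdate progress; it completes once every pod reports the new update revision
    kubectl --namespace statefulset-2591 rollout status statefulset/ss2

    # Roll back to the previous revision; the test achieves the same effect by re-applying the old template
    kubectl --namespace statefulset-2591 rollout undo statefulset/ss2

    # Revision history is kept as ControllerRevisions (ss2-65c7964b94 and ss2-84f9d6bf57 in this run)
    kubectl --namespace statefulset-2591 get controllerrevisions

The test's "Waiting for Pod ... to have revision ... update revision ..." lines correspond to the StatefulSet controller replacing pods in reverse ordinal order until each pod's controller-revision-hash label matches the target revision.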
• [SLOW TEST:193.852 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":294,"completed":248,"skipped":4047,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:02:11.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1234 Aug 17 01:02:18.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 17 01:02:27.970: INFO: stderr: "I0817 01:02:27.876539 2956 log.go:181] (0xc00003abb0) (0xc000c83680) Create stream\nI0817 01:02:27.876606 2956 log.go:181] (0xc00003abb0) (0xc000c83680) Stream added, broadcasting: 1\nI0817 01:02:27.883476 2956 log.go:181] (0xc00003abb0) Reply frame received for 1\nI0817 01:02:27.883509 2956 log.go:181] (0xc00003abb0) (0xc000374460) Create stream\nI0817 01:02:27.883516 2956 log.go:181] (0xc00003abb0) (0xc000374460) Stream added, broadcasting: 3\nI0817 01:02:27.884416 2956 log.go:181] (0xc00003abb0) Reply frame received for 3\nI0817 01:02:27.884446 2956 log.go:181] (0xc00003abb0) (0xc000375400) Create stream\nI0817 01:02:27.884455 2956 log.go:181] (0xc00003abb0) (0xc000375400) Stream added, broadcasting: 5\nI0817 01:02:27.885304 2956 log.go:181] (0xc00003abb0) Reply frame received for 5\nI0817 01:02:27.960606 2956 log.go:181] (0xc00003abb0) Data frame received for 5\nI0817 01:02:27.960643 2956 log.go:181] (0xc000375400) (5) Data frame handling\nI0817 01:02:27.960678 2956 log.go:181] (0xc000375400) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0817 01:02:27.962610 
2956 log.go:181] (0xc00003abb0) Data frame received for 3\nI0817 01:02:27.962629 2956 log.go:181] (0xc000374460) (3) Data frame handling\nI0817 01:02:27.962641 2956 log.go:181] (0xc000374460) (3) Data frame sent\nI0817 01:02:27.963035 2956 log.go:181] (0xc00003abb0) Data frame received for 3\nI0817 01:02:27.963050 2956 log.go:181] (0xc000374460) (3) Data frame handling\nI0817 01:02:27.963064 2956 log.go:181] (0xc00003abb0) Data frame received for 5\nI0817 01:02:27.963080 2956 log.go:181] (0xc000375400) (5) Data frame handling\nI0817 01:02:27.964689 2956 log.go:181] (0xc00003abb0) Data frame received for 1\nI0817 01:02:27.964702 2956 log.go:181] (0xc000c83680) (1) Data frame handling\nI0817 01:02:27.964778 2956 log.go:181] (0xc000c83680) (1) Data frame sent\nI0817 01:02:27.964791 2956 log.go:181] (0xc00003abb0) (0xc000c83680) Stream removed, broadcasting: 1\nI0817 01:02:27.964920 2956 log.go:181] (0xc00003abb0) Go away received\nI0817 01:02:27.965004 2956 log.go:181] (0xc00003abb0) (0xc000c83680) Stream removed, broadcasting: 1\nI0817 01:02:27.965013 2956 log.go:181] (0xc00003abb0) (0xc000374460) Stream removed, broadcasting: 3\nI0817 01:02:27.965018 2956 log.go:181] (0xc00003abb0) (0xc000375400) Stream removed, broadcasting: 5\n" Aug 17 01:02:27.970: INFO: stdout: "iptables" Aug 17 01:02:27.970: INFO: proxyMode: iptables Aug 17 01:02:27.976: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 01:02:27.987: INFO: Pod kube-proxy-mode-detector still exists Aug 17 01:02:29.987: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 01:02:29.992: INFO: Pod kube-proxy-mode-detector still exists Aug 17 01:02:31.987: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 01:02:31.990: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1234 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1234 I0817 01:02:32.052805 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1234, replica count: 3 I0817 01:02:35.103328 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:02:38.103585 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:02:41.103834 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 01:02:41.114: INFO: Creating new exec pod Aug 17 01:02:46.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Aug 17 01:02:46.381: INFO: stderr: "I0817 01:02:46.290411 2974 log.go:181] (0xc000ac2e70) (0xc000e1a320) Create stream\nI0817 01:02:46.290466 2974 log.go:181] (0xc000ac2e70) (0xc000e1a320) Stream added, broadcasting: 1\nI0817 01:02:46.295352 2974 log.go:181] (0xc000ac2e70) Reply frame received for 1\nI0817 01:02:46.295394 2974 log.go:181] (0xc000ac2e70) (0xc000a67220) Create stream\nI0817 01:02:46.295405 2974 log.go:181] (0xc000ac2e70) (0xc000a67220) Stream added, broadcasting: 3\nI0817 01:02:46.296394 2974 log.go:181] (0xc000ac2e70) Reply frame received for 3\nI0817 01:02:46.296433 2974 
log.go:181] (0xc000ac2e70) (0xc00063abe0) Create stream\nI0817 01:02:46.296448 2974 log.go:181] (0xc000ac2e70) (0xc00063abe0) Stream added, broadcasting: 5\nI0817 01:02:46.297431 2974 log.go:181] (0xc000ac2e70) Reply frame received for 5\nI0817 01:02:46.370851 2974 log.go:181] (0xc000ac2e70) Data frame received for 5\nI0817 01:02:46.370881 2974 log.go:181] (0xc00063abe0) (5) Data frame handling\nI0817 01:02:46.370900 2974 log.go:181] (0xc00063abe0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0817 01:02:46.371465 2974 log.go:181] (0xc000ac2e70) Data frame received for 5\nI0817 01:02:46.371487 2974 log.go:181] (0xc00063abe0) (5) Data frame handling\nI0817 01:02:46.371503 2974 log.go:181] (0xc00063abe0) (5) Data frame sent\nI0817 01:02:46.371514 2974 log.go:181] (0xc000ac2e70) Data frame received for 5\nI0817 01:02:46.371526 2974 log.go:181] (0xc00063abe0) (5) Data frame handling\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0817 01:02:46.371701 2974 log.go:181] (0xc000ac2e70) Data frame received for 3\nI0817 01:02:46.371722 2974 log.go:181] (0xc000a67220) (3) Data frame handling\nI0817 01:02:46.373923 2974 log.go:181] (0xc000ac2e70) Data frame received for 1\nI0817 01:02:46.373944 2974 log.go:181] (0xc000e1a320) (1) Data frame handling\nI0817 01:02:46.373953 2974 log.go:181] (0xc000e1a320) (1) Data frame sent\nI0817 01:02:46.373997 2974 log.go:181] (0xc000ac2e70) (0xc000e1a320) Stream removed, broadcasting: 1\nI0817 01:02:46.374062 2974 log.go:181] (0xc000ac2e70) Go away received\nI0817 01:02:46.374301 2974 log.go:181] (0xc000ac2e70) (0xc000e1a320) Stream removed, broadcasting: 1\nI0817 01:02:46.374319 2974 log.go:181] (0xc000ac2e70) (0xc000a67220) Stream removed, broadcasting: 3\nI0817 01:02:46.374327 2974 log.go:181] (0xc000ac2e70) (0xc00063abe0) Stream removed, broadcasting: 5\n" Aug 17 01:02:46.381: INFO: stdout: "" Aug 17 01:02:46.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c nc -zv -t -w 2 10.110.105.45 80' Aug 17 01:02:46.578: INFO: stderr: "I0817 01:02:46.501676 2992 log.go:181] (0xc0006eafd0) (0xc000aef360) Create stream\nI0817 01:02:46.501728 2992 log.go:181] (0xc0006eafd0) (0xc000aef360) Stream added, broadcasting: 1\nI0817 01:02:46.505520 2992 log.go:181] (0xc0006eafd0) Reply frame received for 1\nI0817 01:02:46.505573 2992 log.go:181] (0xc0006eafd0) (0xc00094adc0) Create stream\nI0817 01:02:46.505596 2992 log.go:181] (0xc0006eafd0) (0xc00094adc0) Stream added, broadcasting: 3\nI0817 01:02:46.506338 2992 log.go:181] (0xc0006eafd0) Reply frame received for 3\nI0817 01:02:46.506377 2992 log.go:181] (0xc0006eafd0) (0xc0009326e0) Create stream\nI0817 01:02:46.506391 2992 log.go:181] (0xc0006eafd0) (0xc0009326e0) Stream added, broadcasting: 5\nI0817 01:02:46.508034 2992 log.go:181] (0xc0006eafd0) Reply frame received for 5\nI0817 01:02:46.571699 2992 log.go:181] (0xc0006eafd0) Data frame received for 5\nI0817 01:02:46.571718 2992 log.go:181] (0xc0009326e0) (5) Data frame handling\nI0817 01:02:46.571724 2992 log.go:181] (0xc0009326e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.105.45 80\nConnection to 10.110.105.45 80 port [tcp/http] succeeded!\nI0817 01:02:46.571744 2992 log.go:181] (0xc0006eafd0) Data frame received for 3\nI0817 01:02:46.571763 2992 log.go:181] (0xc00094adc0) (3) Data frame handling\nI0817 01:02:46.571794 2992 log.go:181] (0xc0006eafd0) Data frame received for 5\nI0817 
01:02:46.571811 2992 log.go:181] (0xc0009326e0) (5) Data frame handling\nI0817 01:02:46.572444 2992 log.go:181] (0xc0006eafd0) Data frame received for 1\nI0817 01:02:46.572452 2992 log.go:181] (0xc000aef360) (1) Data frame handling\nI0817 01:02:46.572462 2992 log.go:181] (0xc000aef360) (1) Data frame sent\nI0817 01:02:46.572469 2992 log.go:181] (0xc0006eafd0) (0xc000aef360) Stream removed, broadcasting: 1\nI0817 01:02:46.572656 2992 log.go:181] (0xc0006eafd0) Go away received\nI0817 01:02:46.572717 2992 log.go:181] (0xc0006eafd0) (0xc000aef360) Stream removed, broadcasting: 1\nI0817 01:02:46.572778 2992 log.go:181] (0xc0006eafd0) (0xc00094adc0) Stream removed, broadcasting: 3\nI0817 01:02:46.572784 2992 log.go:181] (0xc0006eafd0) (0xc0009326e0) Stream removed, broadcasting: 5\n" Aug 17 01:02:46.578: INFO: stdout: "" Aug 17 01:02:46.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30266' Aug 17 01:02:46.808: INFO: stderr: "I0817 01:02:46.716000 3011 log.go:181] (0xc000fbafd0) (0xc000b69c20) Create stream\nI0817 01:02:46.716062 3011 log.go:181] (0xc000fbafd0) (0xc000b69c20) Stream added, broadcasting: 1\nI0817 01:02:46.722048 3011 log.go:181] (0xc000fbafd0) Reply frame received for 1\nI0817 01:02:46.722096 3011 log.go:181] (0xc000fbafd0) (0xc000b512c0) Create stream\nI0817 01:02:46.722112 3011 log.go:181] (0xc000fbafd0) (0xc000b512c0) Stream added, broadcasting: 3\nI0817 01:02:46.723154 3011 log.go:181] (0xc000fbafd0) Reply frame received for 3\nI0817 01:02:46.723192 3011 log.go:181] (0xc000fbafd0) (0xc000889400) Create stream\nI0817 01:02:46.723206 3011 log.go:181] (0xc000fbafd0) (0xc000889400) Stream added, broadcasting: 5\nI0817 01:02:46.724159 3011 log.go:181] (0xc000fbafd0) Reply frame received for 5\nI0817 01:02:46.798683 3011 log.go:181] (0xc000fbafd0) Data frame received for 5\nI0817 01:02:46.798718 3011 log.go:181] (0xc000889400) (5) Data frame handling\nI0817 01:02:46.798741 3011 log.go:181] (0xc000889400) (5) Data frame sent\nI0817 01:02:46.798752 3011 log.go:181] (0xc000fbafd0) Data frame received for 5\nI0817 01:02:46.798759 3011 log.go:181] (0xc000889400) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30266\nConnection to 172.18.0.11 30266 port [tcp/30266] succeeded!\nI0817 01:02:46.798782 3011 log.go:181] (0xc000889400) (5) Data frame sent\nI0817 01:02:46.799112 3011 log.go:181] (0xc000fbafd0) Data frame received for 3\nI0817 01:02:46.799130 3011 log.go:181] (0xc000b512c0) (3) Data frame handling\nI0817 01:02:46.799212 3011 log.go:181] (0xc000fbafd0) Data frame received for 5\nI0817 01:02:46.799256 3011 log.go:181] (0xc000889400) (5) Data frame handling\nI0817 01:02:46.800611 3011 log.go:181] (0xc000fbafd0) Data frame received for 1\nI0817 01:02:46.800660 3011 log.go:181] (0xc000b69c20) (1) Data frame handling\nI0817 01:02:46.800684 3011 log.go:181] (0xc000b69c20) (1) Data frame sent\nI0817 01:02:46.800854 3011 log.go:181] (0xc000fbafd0) (0xc000b69c20) Stream removed, broadcasting: 1\nI0817 01:02:46.800912 3011 log.go:181] (0xc000fbafd0) Go away received\nI0817 01:02:46.801286 3011 log.go:181] (0xc000fbafd0) (0xc000b69c20) Stream removed, broadcasting: 1\nI0817 01:02:46.801310 3011 log.go:181] (0xc000fbafd0) (0xc000b512c0) Stream removed, broadcasting: 3\nI0817 01:02:46.801323 3011 log.go:181] (0xc000fbafd0) (0xc000889400) Stream removed, broadcasting: 5\n" Aug 17 01:02:46.809: INFO: stdout: "" Aug 17 
01:02:46.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30266' Aug 17 01:02:47.019: INFO: stderr: "I0817 01:02:46.952692 3029 log.go:181] (0xc000bd2630) (0xc000a0f720) Create stream\nI0817 01:02:46.952908 3029 log.go:181] (0xc000bd2630) (0xc000a0f720) Stream added, broadcasting: 1\nI0817 01:02:46.957570 3029 log.go:181] (0xc000bd2630) Reply frame received for 1\nI0817 01:02:46.957610 3029 log.go:181] (0xc000bd2630) (0xc0009970e0) Create stream\nI0817 01:02:46.957623 3029 log.go:181] (0xc000bd2630) (0xc0009970e0) Stream added, broadcasting: 3\nI0817 01:02:46.958255 3029 log.go:181] (0xc000bd2630) Reply frame received for 3\nI0817 01:02:46.958279 3029 log.go:181] (0xc000bd2630) (0xc000718aa0) Create stream\nI0817 01:02:46.958287 3029 log.go:181] (0xc000bd2630) (0xc000718aa0) Stream added, broadcasting: 5\nI0817 01:02:46.958911 3029 log.go:181] (0xc000bd2630) Reply frame received for 5\nI0817 01:02:47.010457 3029 log.go:181] (0xc000bd2630) Data frame received for 3\nI0817 01:02:47.010485 3029 log.go:181] (0xc0009970e0) (3) Data frame handling\nI0817 01:02:47.010541 3029 log.go:181] (0xc000bd2630) Data frame received for 5\nI0817 01:02:47.010586 3029 log.go:181] (0xc000718aa0) (5) Data frame handling\nI0817 01:02:47.010637 3029 log.go:181] (0xc000718aa0) (5) Data frame sent\nI0817 01:02:47.010661 3029 log.go:181] (0xc000bd2630) Data frame received for 5\nI0817 01:02:47.010684 3029 log.go:181] (0xc000718aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30266\nConnection to 172.18.0.14 30266 port [tcp/30266] succeeded!\nI0817 01:02:47.012430 3029 log.go:181] (0xc000bd2630) Data frame received for 1\nI0817 01:02:47.012459 3029 log.go:181] (0xc000a0f720) (1) Data frame handling\nI0817 01:02:47.012478 3029 log.go:181] (0xc000a0f720) (1) Data frame sent\nI0817 01:02:47.012493 3029 log.go:181] (0xc000bd2630) (0xc000a0f720) Stream removed, broadcasting: 1\nI0817 01:02:47.012507 3029 log.go:181] (0xc000bd2630) Go away received\nI0817 01:02:47.012819 3029 log.go:181] (0xc000bd2630) (0xc000a0f720) Stream removed, broadcasting: 1\nI0817 01:02:47.012833 3029 log.go:181] (0xc000bd2630) (0xc0009970e0) Stream removed, broadcasting: 3\nI0817 01:02:47.012838 3029 log.go:181] (0xc000bd2630) (0xc000718aa0) Stream removed, broadcasting: 5\n" Aug 17 01:02:47.019: INFO: stdout: "" Aug 17 01:02:47.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30266/ ; done' Aug 17 01:02:47.329: INFO: stderr: "I0817 01:02:47.155024 3048 log.go:181] (0xc000f18d10) (0xc00092e280) Create stream\nI0817 01:02:47.155065 3048 log.go:181] (0xc000f18d10) (0xc00092e280) Stream added, broadcasting: 1\nI0817 01:02:47.160901 3048 log.go:181] (0xc000f18d10) Reply frame received for 1\nI0817 01:02:47.160959 3048 log.go:181] (0xc000f18d10) (0xc000a3f220) Create stream\nI0817 01:02:47.160976 3048 log.go:181] (0xc000f18d10) (0xc000a3f220) Stream added, broadcasting: 3\nI0817 01:02:47.161923 3048 log.go:181] (0xc000f18d10) Reply frame received for 3\nI0817 01:02:47.161954 3048 log.go:181] (0xc000f18d10) (0xc000a3a500) Create stream\nI0817 01:02:47.161964 3048 log.go:181] (0xc000f18d10) (0xc000a3a500) Stream added, broadcasting: 5\nI0817 01:02:47.162941 3048 log.go:181] 
(0xc000f18d10) Reply frame received for 5\nI0817 01:02:47.230843 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.230873 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.230907 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.230923 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.230967 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.231001 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.237564 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.237583 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.237600 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.238377 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.238397 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.238421 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.238441 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.238453 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.238462 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.243695 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.243716 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.243732 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.244146 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.244159 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.244166 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.244250 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.244281 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.244313 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.253077 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.253096 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.253111 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.254084 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.254106 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.254119 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.254136 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.254145 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.254155 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.258066 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.258088 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.258102 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.258455 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.258476 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.258501 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.258520 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.258537 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 
01:02:47.258548 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.261892 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.261910 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.261929 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.262766 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.262803 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.262833 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.262869 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.262885 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.262929 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.268913 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.268942 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.268972 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.269626 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.269667 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.269686 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.269707 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.269727 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\nI0817 01:02:47.269738 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.269746 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.269768 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\nI0817 01:02:47.269777 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.273520 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.273537 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.273553 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.273920 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.273939 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.273964 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.273979 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.273995 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.274004 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.280157 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.280181 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.280193 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.280965 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.280978 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.280986 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.280997 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.281003 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.281010 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.285504 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 
01:02:47.285529 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.285604 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.287458 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.287602 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.287768 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.287884 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.288015 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.288130 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.293852 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.293874 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.293892 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.294504 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.294533 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.294544 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.294565 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.294578 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.294599 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.298642 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.298657 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.298668 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.299189 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.299200 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.299210 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.299227 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.299241 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.299250 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.303173 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.303186 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.303197 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.303707 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.303721 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.303739 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.303751 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.303764 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.303780 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.307041 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.307054 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.307074 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.307715 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.307730 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.307737 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 
01:02:47.307748 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.307754 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.307763 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.311959 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.311983 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.312004 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.312377 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.312408 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.312423 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.312448 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.312456 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.312467 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.316187 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.316203 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.316219 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.316612 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.316635 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.316648 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.316666 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.316684 3048 log.go:181] (0xc000a3a500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.316820 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.321073 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.321098 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.321118 3048 log.go:181] (0xc000a3f220) (3) Data frame sent\nI0817 01:02:47.321748 3048 log.go:181] (0xc000f18d10) Data frame received for 3\nI0817 01:02:47.321763 3048 log.go:181] (0xc000a3f220) (3) Data frame handling\nI0817 01:02:47.321863 3048 log.go:181] (0xc000f18d10) Data frame received for 5\nI0817 01:02:47.321884 3048 log.go:181] (0xc000a3a500) (5) Data frame handling\nI0817 01:02:47.323386 3048 log.go:181] (0xc000f18d10) Data frame received for 1\nI0817 01:02:47.323399 3048 log.go:181] (0xc00092e280) (1) Data frame handling\nI0817 01:02:47.323408 3048 log.go:181] (0xc00092e280) (1) Data frame sent\nI0817 01:02:47.323439 3048 log.go:181] (0xc000f18d10) (0xc00092e280) Stream removed, broadcasting: 1\nI0817 01:02:47.323470 3048 log.go:181] (0xc000f18d10) Go away received\nI0817 01:02:47.323818 3048 log.go:181] (0xc000f18d10) (0xc00092e280) Stream removed, broadcasting: 1\nI0817 01:02:47.323833 3048 log.go:181] (0xc000f18d10) (0xc000a3f220) Stream removed, broadcasting: 3\nI0817 01:02:47.323840 3048 log.go:181] (0xc000f18d10) (0xc000a3a500) Stream removed, broadcasting: 5\n" Aug 17 01:02:47.330: INFO: stdout: 
"\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr\naffinity-nodeport-timeout-rrntr" Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Received response from host: affinity-nodeport-timeout-rrntr Aug 17 01:02:47.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30266/' Aug 17 01:02:47.545: INFO: stderr: "I0817 01:02:47.467891 3066 log.go:181] (0xc0005bd550) (0xc000d906e0) Create stream\nI0817 01:02:47.467940 3066 log.go:181] (0xc0005bd550) (0xc000d906e0) Stream added, broadcasting: 1\nI0817 01:02:47.473497 3066 log.go:181] (0xc0005bd550) Reply frame received for 1\nI0817 01:02:47.473532 3066 log.go:181] (0xc0005bd550) (0xc0007b2140) Create stream\nI0817 01:02:47.473544 3066 log.go:181] (0xc0005bd550) (0xc0007b2140) Stream added, broadcasting: 3\nI0817 01:02:47.474481 3066 log.go:181] (0xc0005bd550) Reply frame received for 3\nI0817 01:02:47.474532 3066 log.go:181] (0xc0005bd550) (0xc0004b9ae0) Create stream\nI0817 01:02:47.474552 3066 log.go:181] (0xc0005bd550) (0xc0004b9ae0) Stream added, broadcasting: 5\nI0817 01:02:47.475305 3066 log.go:181] (0xc0005bd550) Reply frame received for 5\nI0817 01:02:47.534396 3066 log.go:181] (0xc0005bd550) Data frame received for 5\nI0817 01:02:47.534446 3066 log.go:181] (0xc0004b9ae0) (5) Data frame handling\nI0817 01:02:47.534489 3066 log.go:181] (0xc0004b9ae0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:02:47.536526 3066 log.go:181] (0xc0005bd550) Data frame received for 3\nI0817 01:02:47.536551 3066 log.go:181] (0xc0007b2140) (3) Data frame handling\nI0817 01:02:47.536567 3066 log.go:181] (0xc0007b2140) (3) Data frame 
sent\nI0817 01:02:47.537343 3066 log.go:181] (0xc0005bd550) Data frame received for 3\nI0817 01:02:47.537366 3066 log.go:181] (0xc0007b2140) (3) Data frame handling\nI0817 01:02:47.537385 3066 log.go:181] (0xc0005bd550) Data frame received for 5\nI0817 01:02:47.537391 3066 log.go:181] (0xc0004b9ae0) (5) Data frame handling\nI0817 01:02:47.538685 3066 log.go:181] (0xc0005bd550) Data frame received for 1\nI0817 01:02:47.538698 3066 log.go:181] (0xc000d906e0) (1) Data frame handling\nI0817 01:02:47.538716 3066 log.go:181] (0xc000d906e0) (1) Data frame sent\nI0817 01:02:47.538871 3066 log.go:181] (0xc0005bd550) (0xc000d906e0) Stream removed, broadcasting: 1\nI0817 01:02:47.538967 3066 log.go:181] (0xc0005bd550) Go away received\nI0817 01:02:47.539205 3066 log.go:181] (0xc0005bd550) (0xc000d906e0) Stream removed, broadcasting: 1\nI0817 01:02:47.539226 3066 log.go:181] (0xc0005bd550) (0xc0007b2140) Stream removed, broadcasting: 3\nI0817 01:02:47.539236 3066 log.go:181] (0xc0005bd550) (0xc0004b9ae0) Stream removed, broadcasting: 5\n" Aug 17 01:02:47.545: INFO: stdout: "affinity-nodeport-timeout-rrntr" Aug 17 01:03:02.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30266/' Aug 17 01:03:02.767: INFO: stderr: "I0817 01:03:02.679760 3085 log.go:181] (0xc000c9d970) (0xc000ba5360) Create stream\nI0817 01:03:02.679826 3085 log.go:181] (0xc000c9d970) (0xc000ba5360) Stream added, broadcasting: 1\nI0817 01:03:02.682516 3085 log.go:181] (0xc000c9d970) Reply frame received for 1\nI0817 01:03:02.682576 3085 log.go:181] (0xc000c9d970) (0xc000ba5400) Create stream\nI0817 01:03:02.682602 3085 log.go:181] (0xc000c9d970) (0xc000ba5400) Stream added, broadcasting: 3\nI0817 01:03:02.683588 3085 log.go:181] (0xc000c9d970) Reply frame received for 3\nI0817 01:03:02.683623 3085 log.go:181] (0xc000c9d970) (0xc000726b40) Create stream\nI0817 01:03:02.683630 3085 log.go:181] (0xc000c9d970) (0xc000726b40) Stream added, broadcasting: 5\nI0817 01:03:02.684598 3085 log.go:181] (0xc000c9d970) Reply frame received for 5\nI0817 01:03:02.753706 3085 log.go:181] (0xc000c9d970) Data frame received for 5\nI0817 01:03:02.753747 3085 log.go:181] (0xc000726b40) (5) Data frame handling\nI0817 01:03:02.753770 3085 log.go:181] (0xc000726b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:03:02.758761 3085 log.go:181] (0xc000c9d970) Data frame received for 3\nI0817 01:03:02.758782 3085 log.go:181] (0xc000ba5400) (3) Data frame handling\nI0817 01:03:02.758807 3085 log.go:181] (0xc000ba5400) (3) Data frame sent\nI0817 01:03:02.759797 3085 log.go:181] (0xc000c9d970) Data frame received for 5\nI0817 01:03:02.759811 3085 log.go:181] (0xc000726b40) (5) Data frame handling\nI0817 01:03:02.759844 3085 log.go:181] (0xc000c9d970) Data frame received for 3\nI0817 01:03:02.759865 3085 log.go:181] (0xc000ba5400) (3) Data frame handling\nI0817 01:03:02.761255 3085 log.go:181] (0xc000c9d970) Data frame received for 1\nI0817 01:03:02.761271 3085 log.go:181] (0xc000ba5360) (1) Data frame handling\nI0817 01:03:02.761289 3085 log.go:181] (0xc000ba5360) (1) Data frame sent\nI0817 01:03:02.761301 3085 log.go:181] (0xc000c9d970) (0xc000ba5360) Stream removed, broadcasting: 1\nI0817 01:03:02.761427 3085 log.go:181] (0xc000c9d970) Go away received\nI0817 01:03:02.761677 3085 log.go:181] (0xc000c9d970) (0xc000ba5360) Stream removed, 
broadcasting: 1\nI0817 01:03:02.761697 3085 log.go:181] (0xc000c9d970) (0xc000ba5400) Stream removed, broadcasting: 3\nI0817 01:03:02.761711 3085 log.go:181] (0xc000c9d970) (0xc000726b40) Stream removed, broadcasting: 5\n" Aug 17 01:03:02.768: INFO: stdout: "affinity-nodeport-timeout-rrntr" Aug 17 01:03:17.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1234 execpod-affinitysp6bq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30266/' Aug 17 01:03:17.969: INFO: stderr: "I0817 01:03:17.891849 3103 log.go:181] (0xc000e9b340) (0xc000bbd360) Create stream\nI0817 01:03:17.891924 3103 log.go:181] (0xc000e9b340) (0xc000bbd360) Stream added, broadcasting: 1\nI0817 01:03:17.897373 3103 log.go:181] (0xc000e9b340) Reply frame received for 1\nI0817 01:03:17.897407 3103 log.go:181] (0xc000e9b340) (0xc000ba2dc0) Create stream\nI0817 01:03:17.897416 3103 log.go:181] (0xc000e9b340) (0xc000ba2dc0) Stream added, broadcasting: 3\nI0817 01:03:17.898434 3103 log.go:181] (0xc000e9b340) Reply frame received for 3\nI0817 01:03:17.898476 3103 log.go:181] (0xc000e9b340) (0xc000ba3360) Create stream\nI0817 01:03:17.898490 3103 log.go:181] (0xc000e9b340) (0xc000ba3360) Stream added, broadcasting: 5\nI0817 01:03:17.899382 3103 log.go:181] (0xc000e9b340) Reply frame received for 5\nI0817 01:03:17.958435 3103 log.go:181] (0xc000e9b340) Data frame received for 5\nI0817 01:03:17.958468 3103 log.go:181] (0xc000ba3360) (5) Data frame handling\nI0817 01:03:17.958489 3103 log.go:181] (0xc000ba3360) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30266/\nI0817 01:03:17.961978 3103 log.go:181] (0xc000e9b340) Data frame received for 3\nI0817 01:03:17.962008 3103 log.go:181] (0xc000ba2dc0) (3) Data frame handling\nI0817 01:03:17.962036 3103 log.go:181] (0xc000ba2dc0) (3) Data frame sent\nI0817 01:03:17.962617 3103 log.go:181] (0xc000e9b340) Data frame received for 5\nI0817 01:03:17.962657 3103 log.go:181] (0xc000ba3360) (5) Data frame handling\nI0817 01:03:17.962689 3103 log.go:181] (0xc000e9b340) Data frame received for 3\nI0817 01:03:17.962711 3103 log.go:181] (0xc000ba2dc0) (3) Data frame handling\nI0817 01:03:17.964028 3103 log.go:181] (0xc000e9b340) Data frame received for 1\nI0817 01:03:17.964092 3103 log.go:181] (0xc000bbd360) (1) Data frame handling\nI0817 01:03:17.964111 3103 log.go:181] (0xc000bbd360) (1) Data frame sent\nI0817 01:03:17.964120 3103 log.go:181] (0xc000e9b340) (0xc000bbd360) Stream removed, broadcasting: 1\nI0817 01:03:17.964131 3103 log.go:181] (0xc000e9b340) Go away received\nI0817 01:03:17.964483 3103 log.go:181] (0xc000e9b340) (0xc000bbd360) Stream removed, broadcasting: 1\nI0817 01:03:17.964501 3103 log.go:181] (0xc000e9b340) (0xc000ba2dc0) Stream removed, broadcasting: 3\nI0817 01:03:17.964512 3103 log.go:181] (0xc000e9b340) (0xc000ba3360) Stream removed, broadcasting: 5\n" Aug 17 01:03:17.969: INFO: stdout: "affinity-nodeport-timeout-gh2fc" Aug 17 01:03:17.969: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1234, will wait for the garbage collector to delete the pods Aug 17 01:03:18.384: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 235.537586ms Aug 17 01:03:18.985: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.270205ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:03:40.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1234" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:88.592 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":249,"skipped":4058,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:03:40.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:03:40.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3485" for this suite. 
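Note on the affinity-nodeport-timeout spec that just passed: every probe above returned the same backend (affinity-nodeport-timeout-rrntr) until the client sat idle past the affinity window, after which a different pod (affinity-nodeport-timeout-gh2fc) answered. That behavior comes from client-IP session affinity with a timeout on the Service. A minimal sketch, assuming an illustrative service name, ports, and a 10-second timeout (the suite's actual values are not shown in the log):

# Sketch: NodePort Service that pins a client to one endpoint until the
# affinity timeout lapses. Name, ports, and timeoutSeconds are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  type: NodePort
  selector:
    app: affinity-demo
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10   # default is 10800; a short value makes expiry observable
EOF
# Repeated curls from one client land on one pod; once the client is idle
# longer than timeoutSeconds, the next request may reach a different pod.
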
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":294,"completed":250,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:03:40.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:03:41.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d" in namespace "downward-api-2468" to be "Succeeded or Failed" Aug 17 01:03:41.516: INFO: Pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 245.645172ms Aug 17 01:03:43.532: INFO: Pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261818095s Aug 17 01:03:45.537: INFO: Pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266616003s Aug 17 01:03:47.550: INFO: Pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280036885s STEP: Saw pod success Aug 17 01:03:47.550: INFO: Pod "downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d" satisfied condition "Succeeded or Failed" Aug 17 01:03:47.553: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d container client-container: STEP: delete the pod Aug 17 01:03:47.649: INFO: Waiting for pod downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d to disappear Aug 17 01:03:47.665: INFO: Pod downwardapi-volume-cdc5f9c6-5748-469a-8b47-58cf29e6b48d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:03:47.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2468" for this suite. 
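The DefaultMode assertion above reduces to a downwardAPI volume whose projected files inherit the volume-level mode. A hedged sketch of the shape of such a pod (names, image, and the 0400 mode are illustrative; the suite uses its own test image and reads the mode from inside the container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # assumed stand-in image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400       # applied to every file projected below
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
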
• [SLOW TEST:7.102 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":251,"skipped":4124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:03:47.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 17 01:03:48.182: INFO: Waiting up to 5m0s for pod "var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7" in namespace "var-expansion-9276" to be "Succeeded or Failed" Aug 17 01:03:48.226: INFO: Pod "var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.337769ms Aug 17 01:03:50.230: INFO: Pod "var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048034751s Aug 17 01:03:52.235: INFO: Pod "var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052912359s STEP: Saw pod success Aug 17 01:03:52.235: INFO: Pod "var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7" satisfied condition "Succeeded or Failed" Aug 17 01:03:52.238: INFO: Trying to get logs from node latest-worker2 pod var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7 container dapi-container: STEP: delete the pod Aug 17 01:03:52.480: INFO: Waiting for pod var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7 to disappear Aug 17 01:03:52.502: INFO: Pod var-expansion-c5d81006-9182-4f7f-82fb-ba2c6fbc17c7 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:03:52.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9276" for this suite. 
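Env-var composition, as exercised above, is plain $(VAR) substitution against variables declared earlier in the same container spec. A minimal sketch (all names and values illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed stand-in image
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo
    - name: BAR
      value: bar
    - name: FOOBAR
      value: "$(FOO);;$(BAR)" # expanded by the kubelet from the two vars above
EOF
# The container prints "foo;;bar".
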
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":294,"completed":252,"skipped":4157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:03:52.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3688.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3688.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3688.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3688.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.57.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.57.244_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3688.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3688.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3688.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3688.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3688.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3688.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.57.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.57.244_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 01:04:04.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.814: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.817: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.837: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.839: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.842: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.845: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:04.868: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 01:04:09.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.876: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods 
dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.879: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.902: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.908: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.911: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:09.924: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 01:04:14.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.877: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.880: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.897: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the 
server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.899: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.901: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.904: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:14.921: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 01:04:19.872: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.877: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.880: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.895: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.897: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.900: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.902: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod 
dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:19.919: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 01:04:24.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.878: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.881: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.902: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.907: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.910: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:24.934: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 
01:04:29.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.950: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.953: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.956: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.972: INFO: Unable to read jessie_udp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.974: INFO: Unable to read jessie_tcp@dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.977: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.979: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local from pod dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53: the server could not find the requested resource (get pods dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53) Aug 17 01:04:29.995: INFO: Lookups using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 failed for: [wheezy_udp@dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@dns-test-service.dns-3688.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_udp@dns-test-service.dns-3688.svc.cluster.local jessie_tcp@dns-test-service.dns-3688.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3688.svc.cluster.local] Aug 17 01:04:34.937: INFO: DNS probes using dns-3688/dns-test-4e607f50-02b4-416e-b3f3-03af82a7fe53 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:04:35.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3688" for this suite. 
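The probe loops above cover the standard cluster-DNS names: A records at <service>.<namespace>.svc.cluster.local, SRV records at _<port>._<proto>.<service>.<namespace>.svc.cluster.local, pod A records, and the reverse PTR for the ClusterIP, each over both UDP and TCP (the early failures are expected while the records propagate). A one-off spot check from inside a cluster, using the test's names; the dnsutils image is an assumed debugging stand-in:

kubectl run dns-check --rm -it --restart=Never --image=tutum/dnsutils -- \
  sh -c 'dig +short dns-test-service.dns-3688.svc.cluster.local A;
         dig +short +tcp _http._tcp.dns-test-service.dns-3688.svc.cluster.local SRV'
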
• [SLOW TEST:43.422 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":294,"completed":253,"skipped":4191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:04:35.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-3c85950b-8a62-4d17-ad62-a2f900ea5555 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:04:42.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2330" for this suite. 
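Binary content travels in a ConfigMap's binaryData field (base64-encoded) alongside plain-text data, and both surface as files when the ConfigMap is mounted, which is what the text/binary waits above verified. An illustrative sketch (key names and payload are made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo
data:
  text-key: hello               # plain UTF-8 value
binaryData:
  binary-key: 3q2+7w==          # base64 of the bytes de ad be ef
EOF
# Mounted as a volume, text-key and binary-key each appear as a file.
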
• [SLOW TEST:6.188 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":254,"skipped":4222,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:04:42.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:04:48.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9368" for this suite. 
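The wrapper-volume conflict check boils down to one pod mounting a Secret and a ConfigMap side by side with both projections coexisting; the cleanup STEPs above mirror that setup. A rough sketch, assuming a Secret and ConfigMap already exist under the names shown:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: main
    image: busybox              # assumed stand-in image
    command: ["sh", "-c", "ls /etc/secret /etc/config"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret   # assumed to exist
  - name: config-vol
    configMap:
      name: demo-config         # assumed to exist
EOF
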
• [SLOW TEST:6.548 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":294,"completed":255,"skipped":4237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:04:48.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 17 01:04:48.997: INFO: Waiting up to 5m0s for pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25" in namespace "emptydir-9720" to be "Succeeded or Failed" Aug 17 01:04:49.012: INFO: Pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.906137ms Aug 17 01:04:51.016: INFO: Pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019222276s Aug 17 01:04:53.036: INFO: Pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039220282s Aug 17 01:04:55.126: INFO: Pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129403134s STEP: Saw pod success Aug 17 01:04:55.126: INFO: Pod "pod-872844f7-5bc6-4705-a577-cd128a0dad25" satisfied condition "Succeeded or Failed" Aug 17 01:04:55.138: INFO: Trying to get logs from node latest-worker pod pod-872844f7-5bc6-4705-a577-cd128a0dad25 container test-container: STEP: delete the pod Aug 17 01:04:56.546: INFO: Waiting for pod pod-872844f7-5bc6-4705-a577-cd128a0dad25 to disappear Aug 17 01:04:56.582: INFO: Pod pod-872844f7-5bc6-4705-a577-cd128a0dad25 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:04:56.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9720" for this suite. 
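The (non-root,0666,default) case above writes into a default-medium emptyDir as a non-root UID and checks 0666 file semantics; the suite does this with its own mounttest image. A hedged approximation with busybox standing in:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # the "non-root" leg of the test matrix
  containers:
  - name: test-container
    image: busybox              # assumed stand-in image
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                # default medium: node disk rather than tmpfs
EOF
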
• [SLOW TEST:7.919 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":256,"skipped":4264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:04:56.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 01:05:12.507: INFO: DNS probes using dns-test-a39c267c-a519-4946-99ab-1e86a61a21a7 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 01:05:20.620: INFO: File wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:20.624: INFO: File jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 17 01:05:20.624: INFO: Lookups using dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 failed for: [wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local] Aug 17 01:05:25.656: INFO: File wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:25.660: INFO: File jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:25.660: INFO: Lookups using dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 failed for: [wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local] Aug 17 01:05:30.628: INFO: File wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:30.632: INFO: File jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:30.632: INFO: Lookups using dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 failed for: [wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local] Aug 17 01:05:35.628: INFO: File wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:35.632: INFO: File jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local from pod dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 01:05:35.632: INFO: Lookups using dns-2415/dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 failed for: [wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local] Aug 17 01:05:40.698: INFO: DNS probes using dns-test-6f8bb4c8-e614-44b3-b610-bde93d2ad9a6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2415.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2415.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 01:05:49.692: INFO: DNS probes using dns-test-91151d5e-93c5-4828-999a-360dc6029204 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:05:50.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2415" for this suite. 
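An ExternalName Service is nothing but a DNS CNAME, which is why the probes above first saw foo.example.com, then bar.example.com after the spec change, and finally an A record once the Service was switched to type=ClusterIP. The repoint the test performed mid-run is a one-line patch (service name taken from the log; namespace flag omitted):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# In-cluster lookups now return the CNAME target; to repoint it:
kubectl patch service dns-test-service-3 \
  -p '{"spec":{"externalName":"bar.example.com"}}'
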
• [SLOW TEST:53.903 seconds] [sig-network] DNS /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":294,"completed":257,"skipped":4303,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:05:50.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2001 STEP: creating service affinity-nodeport-transition in namespace services-2001 STEP: creating replication controller affinity-nodeport-transition in namespace services-2001 I0817 01:05:50.631944 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2001, replica count: 3 I0817 01:05:53.682334 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:05:56.682599 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:05:59.682763 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 01:05:59.689: INFO: Creating new exec pod Aug 17 01:06:09.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 17 01:06:09.657: INFO: stderr: "I0817 01:06:09.579278 3121 log.go:181] (0xc000ca5130) (0xc000598fa0) Create stream\nI0817 01:06:09.579329 3121 log.go:181] (0xc000ca5130) (0xc000598fa0) Stream added, broadcasting: 1\nI0817 01:06:09.581257 3121 log.go:181] (0xc000ca5130) Reply frame received for 1\nI0817 01:06:09.581301 3121 log.go:181] (0xc000ca5130) (0xc0003c2640) Create stream\nI0817 01:06:09.581317 3121 log.go:181] (0xc000ca5130) (0xc0003c2640) Stream added, broadcasting: 3\nI0817 01:06:09.582056 3121 log.go:181] (0xc000ca5130) Reply frame received for 3\nI0817 
01:06:09.582082 3121 log.go:181] (0xc000ca5130) (0xc0003c2f00) Create stream\nI0817 01:06:09.582088 3121 log.go:181] (0xc000ca5130) (0xc0003c2f00) Stream added, broadcasting: 5\nI0817 01:06:09.582764 3121 log.go:181] (0xc000ca5130) Reply frame received for 5\nI0817 01:06:09.646030 3121 log.go:181] (0xc000ca5130) Data frame received for 5\nI0817 01:06:09.646178 3121 log.go:181] (0xc0003c2f00) (5) Data frame handling\nI0817 01:06:09.646266 3121 log.go:181] (0xc0003c2f00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0817 01:06:09.649388 3121 log.go:181] (0xc000ca5130) Data frame received for 5\nI0817 01:06:09.649408 3121 log.go:181] (0xc0003c2f00) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0817 01:06:09.649437 3121 log.go:181] (0xc000ca5130) Data frame received for 3\nI0817 01:06:09.649461 3121 log.go:181] (0xc0003c2640) (3) Data frame handling\nI0817 01:06:09.649488 3121 log.go:181] (0xc0003c2f00) (5) Data frame sent\nI0817 01:06:09.649501 3121 log.go:181] (0xc000ca5130) Data frame received for 5\nI0817 01:06:09.649506 3121 log.go:181] (0xc0003c2f00) (5) Data frame handling\nI0817 01:06:09.650708 3121 log.go:181] (0xc000ca5130) Data frame received for 1\nI0817 01:06:09.650737 3121 log.go:181] (0xc000598fa0) (1) Data frame handling\nI0817 01:06:09.650763 3121 log.go:181] (0xc000598fa0) (1) Data frame sent\nI0817 01:06:09.650782 3121 log.go:181] (0xc000ca5130) (0xc000598fa0) Stream removed, broadcasting: 1\nI0817 01:06:09.650810 3121 log.go:181] (0xc000ca5130) Go away received\nI0817 01:06:09.651219 3121 log.go:181] (0xc000ca5130) (0xc000598fa0) Stream removed, broadcasting: 1\nI0817 01:06:09.651236 3121 log.go:181] (0xc000ca5130) (0xc0003c2640) Stream removed, broadcasting: 3\nI0817 01:06:09.651242 3121 log.go:181] (0xc000ca5130) (0xc0003c2f00) Stream removed, broadcasting: 5\n" Aug 17 01:06:09.657: INFO: stdout: "" Aug 17 01:06:09.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c nc -zv -t -w 2 10.103.68.143 80' Aug 17 01:06:09.887: INFO: stderr: "I0817 01:06:09.789889 3136 log.go:181] (0xc0006cf760) (0xc000a9f9a0) Create stream\nI0817 01:06:09.789938 3136 log.go:181] (0xc0006cf760) (0xc000a9f9a0) Stream added, broadcasting: 1\nI0817 01:06:09.795082 3136 log.go:181] (0xc0006cf760) Reply frame received for 1\nI0817 01:06:09.795115 3136 log.go:181] (0xc0006cf760) (0xc000638be0) Create stream\nI0817 01:06:09.795124 3136 log.go:181] (0xc0006cf760) (0xc000638be0) Stream added, broadcasting: 3\nI0817 01:06:09.796025 3136 log.go:181] (0xc0006cf760) Reply frame received for 3\nI0817 01:06:09.796067 3136 log.go:181] (0xc0006cf760) (0xc0004ab5e0) Create stream\nI0817 01:06:09.796081 3136 log.go:181] (0xc0006cf760) (0xc0004ab5e0) Stream added, broadcasting: 5\nI0817 01:06:09.797013 3136 log.go:181] (0xc0006cf760) Reply frame received for 5\nI0817 01:06:09.879607 3136 log.go:181] (0xc0006cf760) Data frame received for 5\nI0817 01:06:09.879636 3136 log.go:181] (0xc0004ab5e0) (5) Data frame handling\nI0817 01:06:09.879655 3136 log.go:181] (0xc0004ab5e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.68.143 80\nConnection to 10.103.68.143 80 port [tcp/http] succeeded!\nI0817 01:06:09.879758 3136 log.go:181] (0xc0006cf760) Data frame received for 5\nI0817 01:06:09.879783 3136 log.go:181] (0xc0004ab5e0) (5) Data frame handling\nI0817 01:06:09.879990 3136 log.go:181] (0xc0006cf760) Data frame 
received for 3\nI0817 01:06:09.880019 3136 log.go:181] (0xc000638be0) (3) Data frame handling\nI0817 01:06:09.881537 3136 log.go:181] (0xc0006cf760) Data frame received for 1\nI0817 01:06:09.881555 3136 log.go:181] (0xc000a9f9a0) (1) Data frame handling\nI0817 01:06:09.881565 3136 log.go:181] (0xc000a9f9a0) (1) Data frame sent\nI0817 01:06:09.881579 3136 log.go:181] (0xc0006cf760) (0xc000a9f9a0) Stream removed, broadcasting: 1\nI0817 01:06:09.881593 3136 log.go:181] (0xc0006cf760) Go away received\nI0817 01:06:09.881960 3136 log.go:181] (0xc0006cf760) (0xc000a9f9a0) Stream removed, broadcasting: 1\nI0817 01:06:09.881982 3136 log.go:181] (0xc0006cf760) (0xc000638be0) Stream removed, broadcasting: 3\nI0817 01:06:09.881993 3136 log.go:181] (0xc0006cf760) (0xc0004ab5e0) Stream removed, broadcasting: 5\n" Aug 17 01:06:09.887: INFO: stdout: "" Aug 17 01:06:09.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31344' Aug 17 01:06:10.097: INFO: stderr: "I0817 01:06:10.014481 3154 log.go:181] (0xc000d66fd0) (0xc000be57c0) Create stream\nI0817 01:06:10.014548 3154 log.go:181] (0xc000d66fd0) (0xc000be57c0) Stream added, broadcasting: 1\nI0817 01:06:10.018720 3154 log.go:181] (0xc000d66fd0) Reply frame received for 1\nI0817 01:06:10.018768 3154 log.go:181] (0xc000d66fd0) (0xc000a27220) Create stream\nI0817 01:06:10.018785 3154 log.go:181] (0xc000d66fd0) (0xc000a27220) Stream added, broadcasting: 3\nI0817 01:06:10.019528 3154 log.go:181] (0xc000d66fd0) Reply frame received for 3\nI0817 01:06:10.019554 3154 log.go:181] (0xc000d66fd0) (0xc0009c0500) Create stream\nI0817 01:06:10.019561 3154 log.go:181] (0xc000d66fd0) (0xc0009c0500) Stream added, broadcasting: 5\nI0817 01:06:10.020230 3154 log.go:181] (0xc000d66fd0) Reply frame received for 5\nI0817 01:06:10.090485 3154 log.go:181] (0xc000d66fd0) Data frame received for 5\nI0817 01:06:10.090518 3154 log.go:181] (0xc0009c0500) (5) Data frame handling\nI0817 01:06:10.090539 3154 log.go:181] (0xc0009c0500) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 31344\nConnection to 172.18.0.11 31344 port [tcp/31344] succeeded!\nI0817 01:06:10.091091 3154 log.go:181] (0xc000d66fd0) Data frame received for 3\nI0817 01:06:10.091116 3154 log.go:181] (0xc000a27220) (3) Data frame handling\nI0817 01:06:10.091133 3154 log.go:181] (0xc000d66fd0) Data frame received for 5\nI0817 01:06:10.091145 3154 log.go:181] (0xc0009c0500) (5) Data frame handling\nI0817 01:06:10.092545 3154 log.go:181] (0xc000d66fd0) Data frame received for 1\nI0817 01:06:10.092566 3154 log.go:181] (0xc000be57c0) (1) Data frame handling\nI0817 01:06:10.092580 3154 log.go:181] (0xc000be57c0) (1) Data frame sent\nI0817 01:06:10.092714 3154 log.go:181] (0xc000d66fd0) (0xc000be57c0) Stream removed, broadcasting: 1\nI0817 01:06:10.093009 3154 log.go:181] (0xc000d66fd0) Go away received\nI0817 01:06:10.093334 3154 log.go:181] (0xc000d66fd0) (0xc000be57c0) Stream removed, broadcasting: 1\nI0817 01:06:10.093355 3154 log.go:181] (0xc000d66fd0) (0xc000a27220) Stream removed, broadcasting: 3\nI0817 01:06:10.093372 3154 log.go:181] (0xc000d66fd0) (0xc0009c0500) Stream removed, broadcasting: 5\n" Aug 17 01:06:10.098: INFO: stdout: "" Aug 17 01:06:10.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31344' 
Aug 17 01:06:10.284: INFO: stderr: "I0817 01:06:10.217889 3172 log.go:181] (0xc000576fd0) (0xc000bf9720) Create stream\nI0817 01:06:10.217936 3172 log.go:181] (0xc000576fd0) (0xc000bf9720) Stream added, broadcasting: 1\nI0817 01:06:10.222140 3172 log.go:181] (0xc000576fd0) Reply frame received for 1\nI0817 01:06:10.222169 3172 log.go:181] (0xc000576fd0) (0xc000a4a3c0) Create stream\nI0817 01:06:10.222181 3172 log.go:181] (0xc000576fd0) (0xc000a4a3c0) Stream added, broadcasting: 3\nI0817 01:06:10.222719 3172 log.go:181] (0xc000576fd0) Reply frame received for 3\nI0817 01:06:10.222749 3172 log.go:181] (0xc000576fd0) (0xc0006ab220) Create stream\nI0817 01:06:10.222758 3172 log.go:181] (0xc000576fd0) (0xc0006ab220) Stream added, broadcasting: 5\nI0817 01:06:10.223184 3172 log.go:181] (0xc000576fd0) Reply frame received for 5\nI0817 01:06:10.277724 3172 log.go:181] (0xc000576fd0) Data frame received for 3\nI0817 01:06:10.277754 3172 log.go:181] (0xc000a4a3c0) (3) Data frame handling\nI0817 01:06:10.277771 3172 log.go:181] (0xc000576fd0) Data frame received for 5\nI0817 01:06:10.277785 3172 log.go:181] (0xc0006ab220) (5) Data frame handling\nI0817 01:06:10.277793 3172 log.go:181] (0xc0006ab220) (5) Data frame sent\nI0817 01:06:10.277799 3172 log.go:181] (0xc000576fd0) Data frame received for 5\nI0817 01:06:10.277805 3172 log.go:181] (0xc0006ab220) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31344\nConnection to 172.18.0.14 31344 port [tcp/31344] succeeded!\nI0817 01:06:10.278488 3172 log.go:181] (0xc000576fd0) Data frame received for 1\nI0817 01:06:10.278495 3172 log.go:181] (0xc000bf9720) (1) Data frame handling\nI0817 01:06:10.278500 3172 log.go:181] (0xc000bf9720) (1) Data frame sent\nI0817 01:06:10.278508 3172 log.go:181] (0xc000576fd0) (0xc000bf9720) Stream removed, broadcasting: 1\nI0817 01:06:10.278615 3172 log.go:181] (0xc000576fd0) Go away received\nI0817 01:06:10.278742 3172 log.go:181] (0xc000576fd0) (0xc000bf9720) Stream removed, broadcasting: 1\nI0817 01:06:10.278751 3172 log.go:181] (0xc000576fd0) (0xc000a4a3c0) Stream removed, broadcasting: 3\nI0817 01:06:10.278756 3172 log.go:181] (0xc000576fd0) (0xc0006ab220) Stream removed, broadcasting: 5\n" Aug 17 01:06:10.284: INFO: stdout: "" Aug 17 01:06:10.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31344/ ; done' Aug 17 01:06:10.574: INFO: stderr: "I0817 01:06:10.412914 3190 log.go:181] (0xc000fa7a20) (0xc0009252c0) Create stream\nI0817 01:06:10.412972 3190 log.go:181] (0xc000fa7a20) (0xc0009252c0) Stream added, broadcasting: 1\nI0817 01:06:10.416812 3190 log.go:181] (0xc000fa7a20) Reply frame received for 1\nI0817 01:06:10.416856 3190 log.go:181] (0xc000fa7a20) (0xc000311720) Create stream\nI0817 01:06:10.416871 3190 log.go:181] (0xc000fa7a20) (0xc000311720) Stream added, broadcasting: 3\nI0817 01:06:10.419312 3190 log.go:181] (0xc000fa7a20) Reply frame received for 3\nI0817 01:06:10.419352 3190 log.go:181] (0xc000fa7a20) (0xc000930500) Create stream\nI0817 01:06:10.419367 3190 log.go:181] (0xc000fa7a20) (0xc000930500) Stream added, broadcasting: 5\nI0817 01:06:10.420000 3190 log.go:181] (0xc000fa7a20) Reply frame received for 5\nI0817 01:06:10.469588 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.469627 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.469640 3190 
log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.469657 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.469666 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.469674 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.471988 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.472078 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.472106 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.472427 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.472449 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.472461 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.472468 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.472478 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.472487 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.472495 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.472500 3190 log.go:181] (0xc000930500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0817 01:06:10.472512 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.472520 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.472544 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.472564 3190 log.go:181] (0xc000930500) (5) Data frame sent\n http://172.18.0.11:31344/\nI0817 01:06:10.475767 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.475788 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.475812 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.476442 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.476457 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.476465 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.476487 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.476495 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.476501 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.481247 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.481263 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.481274 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.481900 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.481921 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.481932 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.481947 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.481958 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.481989 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.486136 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.486150 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.486158 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.486774 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.486799 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 
01:06:10.486809 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.486819 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.486824 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.486830 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.486839 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.486844 3190 log.go:181] (0xc000930500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.486856 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.491768 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.491787 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.491805 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.492085 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.492102 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.492112 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.492171 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.492191 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.492206 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.496264 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.496284 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.496299 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.496576 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.496588 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.496600 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.496616 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.496622 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.496628 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.496633 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.496637 3190 log.go:181] (0xc000930500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.496648 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.503599 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.503620 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.503636 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.504261 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.504278 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.504290 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.504328 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.504360 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.504384 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.508935 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.508954 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.508967 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.509464 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.509477 3190 log.go:181] (0xc000930500) (5) Data frame 
handling\nI0817 01:06:10.509486 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.509496 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.509501 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.509508 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.513289 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.513302 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.513311 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.513583 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.513603 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.513620 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/I0817 01:06:10.513628 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.513637 3190 log.go:181] (0xc000930500) (5) Data frame handling\n\nI0817 01:06:10.513652 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.513668 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.513689 3190 log.go:181] (0xc000930500) (5) Data frame sent\nI0817 01:06:10.513709 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.518712 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.518735 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.518759 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.519315 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.519346 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.519367 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.519384 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.519399 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.519426 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.524398 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.524421 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.524444 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.524918 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.524947 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.524968 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.524980 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.524997 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.525012 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.531788 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.531814 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.531835 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.532504 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.532524 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.532533 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.532546 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.532555 3190 log.go:181] (0xc000930500) (5) 
Data frame handling\nI0817 01:06:10.532578 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.538727 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.538749 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.538760 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.539192 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.539218 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.539227 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.539241 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.539250 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.539258 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.546466 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.546490 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.546510 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.547292 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.547307 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.547336 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.547369 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.547389 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.547405 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.552349 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.552364 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.552374 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.553286 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.553296 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.553305 3190 log.go:181] (0xc000930500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.553399 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.553421 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.553436 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.560972 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.560986 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.560991 3190 log.go:181] (0xc000311720) (3) Data frame sent\nI0817 01:06:10.561849 3190 log.go:181] (0xc000fa7a20) Data frame received for 3\nI0817 01:06:10.561893 3190 log.go:181] (0xc000311720) (3) Data frame handling\nI0817 01:06:10.561932 3190 log.go:181] (0xc000fa7a20) Data frame received for 5\nI0817 01:06:10.561975 3190 log.go:181] (0xc000930500) (5) Data frame handling\nI0817 01:06:10.563867 3190 log.go:181] (0xc000fa7a20) Data frame received for 1\nI0817 01:06:10.563894 3190 log.go:181] (0xc0009252c0) (1) Data frame handling\nI0817 01:06:10.563930 3190 log.go:181] (0xc0009252c0) (1) Data frame sent\nI0817 01:06:10.563950 3190 log.go:181] (0xc000fa7a20) (0xc0009252c0) Stream removed, broadcasting: 1\nI0817 01:06:10.563971 3190 log.go:181] (0xc000fa7a20) Go away received\nI0817 01:06:10.564433 3190 log.go:181] (0xc000fa7a20) (0xc0009252c0) Stream removed, broadcasting: 1\nI0817 
01:06:10.564455 3190 log.go:181] (0xc000fa7a20) (0xc000311720) Stream removed, broadcasting: 3\nI0817 01:06:10.564466 3190 log.go:181] (0xc000fa7a20) (0xc000930500) Stream removed, broadcasting: 5\n" Aug 17 01:06:10.575: INFO: stdout: "\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-s2mtk\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-s2mtk\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-s2mtk\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-s2mtk\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-wc9vw\naffinity-nodeport-transition-wc9vw\naffinity-nodeport-transition-s2mtk" Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-s2mtk Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-s2mtk Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-s2mtk Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-s2mtk Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-wc9vw Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-wc9vw Aug 17 01:06:10.575: INFO: Received response from host: affinity-nodeport-transition-s2mtk Aug 17 01:06:10.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod-affinity5nfsm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31344/ ; done' Aug 17 01:06:10.947: INFO: stderr: "I0817 01:06:10.758763 3208 log.go:181] (0xc000f30fd0) (0xc0008f9680) Create stream\nI0817 01:06:10.758819 3208 log.go:181] (0xc000f30fd0) (0xc0008f9680) Stream added, broadcasting: 1\nI0817 01:06:10.763192 3208 log.go:181] (0xc000f30fd0) Reply frame received for 1\nI0817 01:06:10.763232 3208 log.go:181] (0xc000f30fd0) (0xc00089a500) Create stream\nI0817 01:06:10.763242 3208 log.go:181] (0xc000f30fd0) (0xc00089a500) Stream added, broadcasting: 3\nI0817 01:06:10.764022 3208 log.go:181] (0xc000f30fd0) Reply frame received for 3\nI0817 01:06:10.764060 3208 log.go:181] (0xc000f30fd0) (0xc00089ae60) Create stream\nI0817 01:06:10.764071 3208 log.go:181] (0xc000f30fd0) (0xc00089ae60) Stream added, broadcasting: 5\nI0817 01:06:10.765002 3208 log.go:181] (0xc000f30fd0) Reply frame received for 5\nI0817 01:06:10.838414 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.838470 3208 log.go:181] (0xc00089a500) (3) Data frame 
handling\nI0817 01:06:10.838503 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.838539 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.838583 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.838601 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.844059 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.844099 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.844136 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.844667 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.844694 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.844714 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.844856 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.844881 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.844898 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.851432 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.851470 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.851498 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.852212 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.852242 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.852259 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.852328 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.852348 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.852364 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.859265 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.859307 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.859327 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.859664 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.859707 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.859731 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.859759 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.859779 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.859806 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\nI0817 01:06:10.859827 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.859844 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.859894 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\nI0817 01:06:10.865438 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.865475 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.865500 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.866115 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.866137 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.866150 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.866174 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.866185 3208 log.go:181] 
(0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.866204 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.871067 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.871090 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.871106 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.871792 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.871807 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.871819 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.871842 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.871861 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.871886 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.876553 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.876579 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.876596 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.877499 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.877537 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.877552 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.877759 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.877778 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.877793 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.881520 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.881532 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.881541 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.882398 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.882417 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.882438 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.882538 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.882548 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.882556 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.887838 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.887852 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.887858 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.888896 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.888914 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.888920 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.888930 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.888934 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.888939 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.893958 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.893977 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.893994 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.894636 3208 log.go:181] 
(0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.894658 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.894671 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.894691 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.894701 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.894712 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.899523 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.899542 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.899553 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.900301 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.900324 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.900341 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.900362 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.900374 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.900391 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.907420 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.907452 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.907478 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.908127 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.908149 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.908160 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.908179 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.908205 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.908220 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.915623 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.915648 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.915666 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.916447 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.916475 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.916489 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.916509 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.916518 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.916532 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\nI0817 01:06:10.920053 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.920073 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.920091 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.920501 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.920514 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.920519 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.920527 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.920532 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.920536 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.924984 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.925006 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.925021 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.925414 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.925423 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.925429 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.925445 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.925460 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.925476 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.931327 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.931347 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.931364 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.931852 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.931874 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.931905 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.931925 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.931941 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.931951 3208 log.go:181] (0xc00089ae60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31344/\nI0817 01:06:10.936619 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.936636 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.936646 3208 log.go:181] (0xc00089a500) (3) Data frame sent\nI0817 01:06:10.937514 3208 log.go:181] (0xc000f30fd0) Data frame received for 5\nI0817 01:06:10.937543 3208 log.go:181] (0xc00089ae60) (5) Data frame handling\nI0817 01:06:10.937568 3208 log.go:181] (0xc000f30fd0) Data frame received for 3\nI0817 01:06:10.937578 3208 log.go:181] (0xc00089a500) (3) Data frame handling\nI0817 01:06:10.938999 3208 log.go:181] (0xc000f30fd0) Data frame received for 1\nI0817 01:06:10.939027 3208 log.go:181] (0xc0008f9680) (1) Data frame handling\nI0817 01:06:10.939049 3208 log.go:181] (0xc0008f9680) (1) Data frame sent\nI0817 01:06:10.939077 3208 log.go:181] (0xc000f30fd0) (0xc0008f9680) Stream removed, broadcasting: 1\nI0817 01:06:10.939105 3208 log.go:181] (0xc000f30fd0) Go away received\nI0817 01:06:10.939553 3208 log.go:181] (0xc000f30fd0) (0xc0008f9680) Stream removed, broadcasting: 1\nI0817 01:06:10.939578 3208 log.go:181] (0xc000f30fd0) (0xc00089a500) Stream removed, broadcasting: 3\nI0817 01:06:10.939609 3208 log.go:181] (0xc000f30fd0) (0xc00089ae60) Stream removed, broadcasting: 5\n" Aug 17 01:06:10.947: INFO: stdout: "\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck\naffinity-nodeport-transition-xr8ck" Aug 17 01:06:10.947: INFO: Received response from host: 
affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Received response from host: affinity-nodeport-transition-xr8ck Aug 17 01:06:10.947: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2001, will wait for the garbage collector to delete the pods Aug 17 01:06:11.059: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.507862ms Aug 17 01:06:11.659: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.244022ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:16.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2001" for this suite. 
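The two curl runs above tell the story: with affinity off, sixteen requests land on three different backends (xr8ck, s2mtk, wc9vw); after the test flips affinity, every request sticks to a single pod (xr8ck). A sketch of toggling the same field by hand, with the namespace, service name, and NodePort taken from the log (the e2e test does this through the API, not kubectl):

# enable ClientIP affinity: requests from one source IP stick to one endpoint
kubectl -n services-2001 patch service affinity-nodeport-transition \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'
# probe the NodePort a few times, expecting a single backend in the output
for i in $(seq 0 15); do curl -s --connect-timeout 2 http://172.18.0.11:31344/; echo; done
# switch back to the default; distribution across endpoints resumes
kubectl -n services-2001 patch service affinity-nodeport-transition \
  -p '{"spec":{"sessionAffinity":"None"}}'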
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:26.124 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":258,"skipped":4306,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:16.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:06:16.722: INFO: Waiting up to 5m0s for pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237" in namespace "projected-2258" to be "Succeeded or Failed" Aug 17 01:06:16.741: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237": Phase="Pending", Reason="", readiness=false. Elapsed: 18.322258ms Aug 17 01:06:19.331: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608565238s Aug 17 01:06:21.335: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612697198s Aug 17 01:06:23.338: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237": Phase="Running", Reason="", readiness=true. Elapsed: 6.615425766s Aug 17 01:06:25.343: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.620400767s STEP: Saw pod success Aug 17 01:06:25.343: INFO: Pod "downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237" satisfied condition "Succeeded or Failed" Aug 17 01:06:25.346: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237 container client-container: STEP: delete the pod Aug 17 01:06:25.391: INFO: Waiting for pod downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237 to disappear Aug 17 01:06:25.405: INFO: Pod downwardapi-volume-829a0a92-e49e-4170-8b6a-054ce102f237 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:25.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2258" for this suite. • [SLOW TEST:8.796 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":259,"skipped":4313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:25.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-cb31877e-787e-46a4-8b2e-305e98e2c342 STEP: Creating a pod to test consume configMaps Aug 17 01:06:25.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb" in namespace "configmap-4985" to be "Succeeded or Failed" Aug 17 01:06:25.549: INFO: Pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.318893ms Aug 17 01:06:27.672: INFO: Pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149777938s Aug 17 01:06:29.676: INFO: Pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb": Phase="Running", Reason="", readiness=true. Elapsed: 4.153020617s Aug 17 01:06:31.680: INFO: Pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.157308245s STEP: Saw pod success Aug 17 01:06:31.680: INFO: Pod "pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb" satisfied condition "Succeeded or Failed" Aug 17 01:06:31.683: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb container configmap-volume-test: STEP: delete the pod Aug 17 01:06:31.700: INFO: Waiting for pod pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb to disappear Aug 17 01:06:31.719: INFO: Pod pod-configmaps-cd8abefa-578d-46ce-bfcf-09e213e95beb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:31.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4985" for this suite. • [SLOW TEST:6.312 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":260,"skipped":4351,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:31.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:06:31.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb" in namespace "downward-api-7226" to be "Succeeded or Failed" Aug 17 01:06:31.848: INFO: Pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.046574ms Aug 17 01:06:33.990: INFO: Pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144907759s Aug 17 01:06:35.995: INFO: Pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.149732701s Aug 17 01:06:37.999: INFO: Pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.154053706s STEP: Saw pod success Aug 17 01:06:37.999: INFO: Pod "downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb" satisfied condition "Succeeded or Failed" Aug 17 01:06:38.002: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb container client-container: STEP: delete the pod Aug 17 01:06:38.189: INFO: Waiting for pod downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb to disappear Aug 17 01:06:38.195: INFO: Pod downwardapi-volume-e23e1d09-dea0-4776-a970-5ef4edc701bb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7226" for this suite. • [SLOW TEST:6.476 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":294,"completed":261,"skipped":4353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:38.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:06:38.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de" in namespace "projected-189" to be "Succeeded or Failed" Aug 17 01:06:38.282: INFO: Pod "downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de": Phase="Pending", Reason="", readiness=false. Elapsed: 18.314905ms Aug 17 01:06:40.286: INFO: Pod "downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022691771s Aug 17 01:06:42.291: INFO: Pod "downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027148479s STEP: Saw pod success Aug 17 01:06:42.291: INFO: Pod "downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de" satisfied condition "Succeeded or Failed" Aug 17 01:06:42.294: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de container client-container: STEP: delete the pod Aug 17 01:06:42.333: INFO: Waiting for pod downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de to disappear Aug 17 01:06:42.346: INFO: Pod downwardapi-volume-90eac049-5442-4548-9fc1-e05ec1d901de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:42.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-189" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":262,"skipped":4394,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:42.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-643eec37-e47d-4914-b008-4c6034f47db1 STEP: Creating secret with name s-test-opt-upd-547e946b-cd0e-46c4-85e6-c1e5605736cf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-643eec37-e47d-4914-b008-4c6034f47db1 STEP: Updating secret s-test-opt-upd-547e946b-cd0e-46c4-85e6-c1e5605736cf STEP: Creating secret with name s-test-opt-create-97a3e694-18d3-44f1-a5bc-d44cc8ea4526 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:50.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9583" for this suite. 
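The secrets spec just above mounts optional secret volumes and waits for the kubelet to propagate a delete, an update, and a late create into the running pod. A minimal sketch of that propagation, assuming kubectl against a reachable cluster; the names demo-secret and secret-watcher are illustrative, not taken from the test:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-watcher
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/demo/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: demo
      mountPath: /etc/demo
  volumes:
  - name: demo
    secret:
      secretName: demo-secret
      optional: true   # the pod starts even if demo-secret does not exist yet
EOF
# an update to the secret eventually shows up in the mounted file:
kubectl create secret generic demo-secret --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -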
• [SLOW TEST:8.238 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":263,"skipped":4415,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:50.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-518/secret-test-2fc1c323-8b19-4a88-b62f-fe482737e951 STEP: Creating a pod to test consume secrets Aug 17 01:06:50.653: INFO: Waiting up to 5m0s for pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f" in namespace "secrets-518" to be "Succeeded or Failed" Aug 17 01:06:50.674: INFO: Pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.423652ms Aug 17 01:06:52.677: INFO: Pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024259984s Aug 17 01:06:54.681: INFO: Pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028181516s Aug 17 01:06:56.686: INFO: Pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0329247s STEP: Saw pod success Aug 17 01:06:56.686: INFO: Pod "pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f" satisfied condition "Succeeded or Failed" Aug 17 01:06:56.689: INFO: Trying to get logs from node latest-worker pod pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f container env-test: STEP: delete the pod Aug 17 01:06:56.748: INFO: Waiting for pod pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f to disappear Aug 17 01:06:56.870: INFO: Pod pod-configmaps-720d51d8-2847-4bc4-a7e1-d817e51d265f no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:56.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-518" for this suite. 
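The environment-variable variant consumes a secret key through secretKeyRef rather than a volume. A hedged sketch of the same pattern; env-secret and secret-env-demo are illustrative names:

kubectl create secret generic env-secret --from-literal=password=s3cr3t
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_PASSWORD"]
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: password
EOF
kubectl logs secret-env-demo   # s3cr3t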
• [SLOW TEST:6.288 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":264,"skipped":4423,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:56.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Aug 17 01:06:57.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config cluster-info' Aug 17 01:06:57.299: INFO: stderr: "" Aug 17 01:06:57.299: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:57.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2843" for this suite. 
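This spec only shells out to kubectl cluster-info and asserts that the control-plane ("Kubernetes master") and KubeDNS endpoints appear, exactly as in the captured stdout above. Reproducing it by hand against any cluster:

kubectl cluster-info          # control-plane and DNS endpoints, as asserted above
kubectl cluster-info dump     # the fuller diagnostic dump the output hints at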
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":294,"completed":265,"skipped":4445,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:57.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:06:57.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6417" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":294,"completed":266,"skipped":4455,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:06:57.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 01:07:04.835: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:07:04.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5948" for this suite. 
• [SLOW TEST:7.092 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":294,"completed":267,"skipped":4467,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:07:04.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-330889e0-8303-4bc7-b508-0d4b1a56dd39 in namespace container-probe-9659 Aug 17 01:07:09.042: INFO: Started pod busybox-330889e0-8303-4bc7-b508-0d4b1a56dd39 in namespace container-probe-9659 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 01:07:09.047: INFO: Initial restart count of pod busybox-330889e0-8303-4bc7-b508-0d4b1a56dd39 is 0 Aug 17 01:08:01.858: INFO: Restart count of pod container-probe-9659/busybox-330889e0-8303-4bc7-b508-0d4b1a56dd39 is now 1 (52.811062581s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:08:01.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9659" for this suite. 
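The probe spec above starts a busybox pod that deletes /tmp/health after a delay, so the exec liveness probe "cat /tmp/health" begins failing and the kubelet restarts the container, observed by the test as restartCount going from 0 to 1. A minimal reproduction; liveness-exec-demo is an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# after /tmp/health disappears the probe fails and the kubelet restarts the container:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'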
• [SLOW TEST:57.072 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":268,"skipped":4476,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:08:01.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:08:18.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2164" for this suite. • [SLOW TEST:16.549 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":294,"completed":269,"skipped":4490,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:08:18.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Aug 17 01:08:33.430: INFO: 5 pods remaining Aug 17 01:08:33.430: INFO: 5 pods has nil DeletionTimestamp Aug 17 01:08:33.430: INFO: STEP: Gathering metrics W0817 01:08:36.949753 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 01:09:39.154: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 17 01:09:39.154: INFO: Deleting pod "simpletest-rc-to-be-deleted-247j6" in namespace "gc-3622" Aug 17 01:09:39.900: INFO: Deleting pod "simpletest-rc-to-be-deleted-66wzk" in namespace "gc-3622" Aug 17 01:09:41.327: INFO: Deleting pod "simpletest-rc-to-be-deleted-6ghhh" in namespace "gc-3622" Aug 17 01:09:41.956: INFO: Deleting pod "simpletest-rc-to-be-deleted-d4lj4" in namespace "gc-3622" Aug 17 01:09:42.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-lkwg9" in namespace "gc-3622" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:09:43.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3622" for this suite. 
• [SLOW TEST:85.311 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":294,"completed":270,"skipped":4501,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:09:43.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 01:09:51.949: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:09:52.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3849" for this suite. 
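With terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only when the container fails without writing a message file; since the pod above succeeds, the spec expects the message to stay empty. A minimal sketch; fallback-demo is an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fallback-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo these logs would become the message only on failure; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# the pod succeeded and wrote no message file, so the message stays empty:
kubectl get pod fallback-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'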
• [SLOW TEST:8.796 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":271,"skipped":4510,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:09:52.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 01:09:57.448: INFO: Successfully updated pod "annotationupdate6902ded2-ef81-4b9a-9287-53819e484f27" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:01.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1489" for this suite. 
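A downward API volume projects pod metadata into files, and the kubelet rewrites those files when the metadata changes, which is why the spec can annotate a running pod and observe the new values inside it. A sketch; annotation-demo is an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# changing the annotation is eventually reflected in the mounted file:
kubectl annotate pod annotation-demo build=2 --overwrite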
• [SLOW TEST:8.932 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":272,"skipped":4527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:01.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 17 01:10:01.625: INFO: >>> kubeConfig: /root/.kube/config Aug 17 01:10:04.601: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:16.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8629" for this suite. 
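This spec registers two CRDs in different API groups and checks that both schemas are published into the aggregated OpenAPI document. A single-CRD sketch of the same mechanism; the group groupa.example.com and kind Foo are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF
# once published, the schema is visible to OpenAPI consumers such as kubectl explain:
kubectl explain foos.spec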
• [SLOW TEST:15.029 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":294,"completed":273,"skipped":4551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:16.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 17 01:10:16.695: INFO: Waiting up to 5m0s for pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3" in namespace "containers-1636" to be "Succeeded or Failed" Aug 17 01:10:16.721: INFO: Pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.910532ms Aug 17 01:10:18.740: INFO: Pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044889666s Aug 17 01:10:20.744: INFO: Pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048698245s Aug 17 01:10:22.748: INFO: Pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053217618s STEP: Saw pod success Aug 17 01:10:22.748: INFO: Pod "client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3" satisfied condition "Succeeded or Failed" Aug 17 01:10:22.751: INFO: Trying to get logs from node latest-worker2 pod client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3 container test-container: STEP: delete the pod Aug 17 01:10:23.241: INFO: Waiting for pod client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3 to disappear Aug 17 01:10:23.286: INFO: Pod client-containers-2d0f6eff-cbc3-4905-bf24-2005155e66b3 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:23.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1636" for this suite. 
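The override works through the pod spec: command replaces the image's ENTRYPOINT and args replaces its CMD, which is what "override all" checks above. A minimal sketch; command-override-demo is an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]             # replaces the image's ENTRYPOINT
    args: ["hello", "override"]   # replaces the image's CMD
EOF
kubectl logs command-override-demo   # hello override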
• [SLOW TEST:6.704 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":294,"completed":274,"skipped":4593,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:23.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 17 01:10:29.992: INFO: Successfully updated pod "adopt-release-7ds2p" STEP: Checking that the Job readopts the Pod Aug 17 01:10:29.992: INFO: Waiting up to 15m0s for pod "adopt-release-7ds2p" in namespace "job-6180" to be "adopted" Aug 17 01:10:30.012: INFO: Pod "adopt-release-7ds2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.109539ms Aug 17 01:10:32.016: INFO: Pod "adopt-release-7ds2p": Phase="Running", Reason="", readiness=true. Elapsed: 2.023933129s Aug 17 01:10:32.016: INFO: Pod "adopt-release-7ds2p" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 17 01:10:32.528: INFO: Successfully updated pod "adopt-release-7ds2p" STEP: Checking that the Job releases the Pod Aug 17 01:10:32.528: INFO: Waiting up to 15m0s for pod "adopt-release-7ds2p" in namespace "job-6180" to be "released" Aug 17 01:10:32.550: INFO: Pod "adopt-release-7ds2p": Phase="Running", Reason="", readiness=true. Elapsed: 21.875002ms Aug 17 01:10:34.553: INFO: Pod "adopt-release-7ds2p": Phase="Running", Reason="", readiness=true. Elapsed: 2.025722574s Aug 17 01:10:34.554: INFO: Pod "adopt-release-7ds2p" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:34.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6180" for this suite. 
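The Job controller adopts an orphaned pod whose labels match its selector and releases a pod whose labels stop matching, toggling the pod's ownerReference either way, which is what the adopt/release conditions above wait for. A hedged sketch using the controller-uid selector label Jobs applied at this release; <pod-name> is a placeholder:

# a Job-owned pod carries an ownerReference to the Job:
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}'   # Job
# stripping the selector label makes the controller release the pod:
kubectl label pod <pod-name> controller-uid-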
• [SLOW TEST:11.268 seconds] [sig-apps] Job /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":294,"completed":275,"skipped":4615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:34.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9468 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9468 to expose endpoints map[] Aug 17 01:10:35.677: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Aug 17 01:10:36.681: INFO: successfully validated that service endpoint-test2 in namespace services-9468 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9468 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9468 to expose endpoints map[pod1:[80]] Aug 17 01:10:40.849: INFO: successfully validated that service endpoint-test2 in namespace services-9468 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9468 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9468 to expose endpoints map[pod1:[80] pod2:[80]] Aug 17 01:10:44.908: INFO: successfully validated that service endpoint-test2 in namespace services-9468 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9468 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9468 to expose endpoints map[pod2:[80]] Aug 17 01:10:45.023: INFO: successfully validated that service endpoint-test2 in namespace services-9468 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9468 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9468 to expose endpoints map[] Aug 17 01:10:46.037: INFO: successfully validated that service endpoint-test2 in namespace services-9468 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:46.211: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9468" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:11.679 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":294,"completed":276,"skipped":4663,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:46.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-515e784d-3de7-4026-92fb-06fb91935316 STEP: Creating a pod to test consume secrets Aug 17 01:10:47.443: INFO: Waiting up to 5m0s for pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a" in namespace "secrets-6480" to be "Succeeded or Failed" Aug 17 01:10:47.453: INFO: Pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207872ms Aug 17 01:10:49.457: INFO: Pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014055139s Aug 17 01:10:51.863: INFO: Pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419840006s Aug 17 01:10:53.867: INFO: Pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.423819056s STEP: Saw pod success Aug 17 01:10:53.867: INFO: Pod "pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a" satisfied condition "Succeeded or Failed" Aug 17 01:10:53.870: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a container secret-volume-test: STEP: delete the pod Aug 17 01:10:53.996: INFO: Waiting for pod pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a to disappear Aug 17 01:10:54.048: INFO: Pod pod-secrets-d759e8e5-e5fb-48f2-a9ea-61efff34e71a no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:10:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6480" for this suite. • [SLOW TEST:7.934 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":277,"skipped":4667,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:10:54.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:11:13.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8992" for this suite. STEP: Destroying namespace "nsdeletetest-3960" for this suite. Aug 17 01:11:13.166: INFO: Namespace nsdeletetest-3960 was already deleted STEP: Destroying namespace "nsdeletetest-7505" for this suite. 
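Namespace deletion is itself a garbage-collection operation: the namespace sits in Terminating until everything inside it, pods included, has been removed, which is what the spec waits on before recreating the namespace and verifying it is empty. A minimal reproduction; demo-ns and sleeper are illustrative names:

kubectl create namespace demo-ns
kubectl run sleeper --image=busybox:1.29 -n demo-ns -- sleep 3600
kubectl delete namespace demo-ns    # namespace stays Terminating until its pods are reaped
kubectl get pods -n demo-ns         # "No resources found" once deletion completes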
• [SLOW TEST:18.994 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":294,"completed":278,"skipped":4670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:11:13.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 17 01:11:13.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2750' Aug 17 01:11:14.503: INFO: stderr: "" Aug 17 01:11:14.503: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 17 01:11:14.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2750' Aug 17 01:11:14.636: INFO: stderr: "" Aug 17 01:11:14.636: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Aug 17 01:11:19.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2750' Aug 17 01:11:19.881: INFO: stderr: "" Aug 17 01:11:19.881: INFO: stdout: "update-demo-nautilus-gmxhn update-demo-nautilus-k228w " Aug 17 01:11:19.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxhn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2750' Aug 17 01:11:20.001: INFO: stderr: "" Aug 17 01:11:20.001: INFO: stdout: "" Aug 17 01:11:20.001: INFO: update-demo-nautilus-gmxhn is created but not running Aug 17 01:11:25.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2750' Aug 17 01:11:25.118: INFO: stderr: "" Aug 17 01:11:25.118: INFO: stdout: "update-demo-nautilus-gmxhn update-demo-nautilus-k228w " Aug 17 01:11:25.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxhn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2750' Aug 17 01:11:25.218: INFO: stderr: "" Aug 17 01:11:25.218: INFO: stdout: "true" Aug 17 01:11:25.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxhn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2750' Aug 17 01:11:25.319: INFO: stderr: "" Aug 17 01:11:25.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 01:11:25.319: INFO: validating pod update-demo-nautilus-gmxhn Aug 17 01:11:25.322: INFO: got data: { "image": "nautilus.jpg" } Aug 17 01:11:25.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 01:11:25.322: INFO: update-demo-nautilus-gmxhn is verified up and running Aug 17 01:11:25.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k228w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2750' Aug 17 01:11:25.420: INFO: stderr: "" Aug 17 01:11:25.420: INFO: stdout: "true" Aug 17 01:11:25.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k228w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2750' Aug 17 01:11:25.517: INFO: stderr: "" Aug 17 01:11:25.517: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 01:11:25.517: INFO: validating pod update-demo-nautilus-k228w Aug 17 01:11:25.520: INFO: got data: { "image": "nautilus.jpg" } Aug 17 01:11:25.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 01:11:25.521: INFO: update-demo-nautilus-k228w is verified up and running STEP: using delete to clean up resources Aug 17 01:11:25.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2750' Aug 17 01:11:25.643: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 17 01:11:25.643: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 17 01:11:25.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2750' Aug 17 01:11:25.746: INFO: stderr: "No resources found in kubectl-2750 namespace.\n" Aug 17 01:11:25.746: INFO: stdout: "" Aug 17 01:11:25.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2750 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 01:11:25.849: INFO: stderr: "" Aug 17 01:11:25.849: INFO: stdout: "update-demo-nautilus-gmxhn\nupdate-demo-nautilus-k228w\n" Aug 17 01:11:26.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2750' Aug 17 01:11:26.667: INFO: stderr: "No resources found in kubectl-2750 namespace.\n" Aug 17 01:11:26.667: INFO: stdout: "" Aug 17 01:11:26.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2750 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 01:11:26.852: INFO: stderr: "" Aug 17 01:11:26.852: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:11:26.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2750" for this suite. • [SLOW TEST:13.717 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":294,"completed":279,"skipped":4702,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:11:26.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:11:39.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4101" for this suite. • [SLOW TEST:13.144 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":294,"completed":280,"skipped":4713,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:11:40.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 17 01:11:40.148: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix672945768/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:11:40.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1224" for this suite. 
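kubectl proxy can listen on a unix socket instead of a TCP port; the proxy injects the kubeconfig credentials, so any plain HTTP client speaking over the socket reaches the API, which is all the spec asserts by fetching /api/. A sketch; the socket path is illustrative:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# curl 7.40+ can talk HTTP over a unix socket:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/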
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":294,"completed":281,"skipped":4725,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:11:40.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:11:40.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4210" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":294,"completed":282,"skipped":4732,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:11:40.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 01:11:40.795: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2252 I0817 01:11:40.816398 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2252, replica count: 1 I0817 01:11:41.866718 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:11:42.866993 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:11:43.867236 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0817 01:11:44.867458 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 01:11:45.867695 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 01:11:46.013: INFO: Created: latency-svc-7j9mv Aug 17 01:11:46.032: INFO: Got endpoints: latency-svc-7j9mv [64.035988ms] Aug 17 01:11:46.142: INFO: Created: latency-svc-5bgzw Aug 17 01:11:46.186: INFO: Got endpoints: latency-svc-5bgzw [154.427848ms] Aug 17 01:11:46.213: INFO: Created: latency-svc-wdfzp Aug 17 01:11:46.231: INFO: Got endpoints: latency-svc-wdfzp [199.556727ms] Aug 17 01:11:46.253: INFO: Created: latency-svc-88692 Aug 17 01:11:46.271: INFO: Got endpoints: latency-svc-88692 [239.445249ms] Aug 17 01:11:46.318: INFO: Created: latency-svc-rxm2b Aug 17 01:11:46.327: INFO: Got endpoints: latency-svc-rxm2b [295.558881ms] Aug 17 01:11:46.370: INFO: Created: latency-svc-mhh4v Aug 17 01:11:46.393: INFO: Got endpoints: latency-svc-mhh4v [361.593696ms] Aug 17 01:11:46.516: INFO: Created: latency-svc-8jf2m Aug 17 01:11:46.532: INFO: Got endpoints: latency-svc-8jf2m [500.592963ms] Aug 17 01:11:46.610: INFO: Created: latency-svc-sb6j2 Aug 17 01:11:46.665: INFO: Got endpoints: latency-svc-sb6j2 [633.419899ms] Aug 17 01:11:46.700: INFO: Created: latency-svc-mvf7j Aug 17 01:11:46.717: INFO: Got endpoints: latency-svc-mvf7j [684.931062ms] Aug 17 01:11:46.763: INFO: Created: latency-svc-4zg88 Aug 17 01:11:46.820: INFO: Got endpoints: latency-svc-4zg88 [788.395532ms] Aug 17 01:11:46.869: INFO: Created: latency-svc-kkpqj Aug 17 01:11:46.976: INFO: Got endpoints: latency-svc-kkpqj [944.486802ms] Aug 17 01:11:47.000: INFO: Created: latency-svc-tg822 Aug 17 01:11:47.552: INFO: Got endpoints: latency-svc-tg822 [1.519584522s] Aug 17 01:11:47.814: INFO: Created: latency-svc-5wlcx Aug 17 01:11:47.858: INFO: Got endpoints: latency-svc-5wlcx [1.826143332s] Aug 17 01:11:48.283: INFO: Created: latency-svc-jnpqt Aug 17 01:11:48.474: INFO: Got endpoints: latency-svc-jnpqt [2.441575474s] Aug 17 01:11:48.513: INFO: Created: latency-svc-6gw4j Aug 17 01:11:48.549: INFO: Got endpoints: latency-svc-6gw4j [2.516931392s] Aug 17 01:11:48.618: INFO: Created: latency-svc-mjs4k Aug 17 01:11:48.642: INFO: Got endpoints: latency-svc-mjs4k [2.610201328s] Aug 17 01:11:48.691: INFO: Created: latency-svc-qwd9s Aug 17 01:11:48.755: INFO: Got endpoints: latency-svc-qwd9s [2.56924629s] Aug 17 01:11:48.792: INFO: Created: latency-svc-j8z86 Aug 17 01:11:48.808: INFO: Got endpoints: latency-svc-j8z86 [2.57655002s] Aug 17 01:11:48.850: INFO: Created: latency-svc-vptvw Aug 17 01:11:48.919: INFO: Got endpoints: latency-svc-vptvw [2.647245301s] Aug 17 01:11:49.109: INFO: Created: latency-svc-57mkn Aug 17 01:11:49.312: INFO: Got endpoints: latency-svc-57mkn [2.985148702s] Aug 17 01:11:49.339: INFO: Created: latency-svc-nx7ht Aug 17 01:11:49.360: INFO: Got endpoints: latency-svc-nx7ht [440.860118ms] Aug 17 01:11:49.399: INFO: Created: latency-svc-dffb4 Aug 17 01:11:49.438: INFO: Got endpoints: latency-svc-dffb4 [3.044160587s] Aug 17 01:11:49.449: INFO: Created: latency-svc-4b29f Aug 17 01:11:49.462: INFO: Got endpoints: latency-svc-4b29f [2.929433782s] Aug 17 01:11:49.486: INFO: Created: latency-svc-j2cmm Aug 17 01:11:49.498: INFO: Got endpoints: latency-svc-j2cmm [2.832762162s] Aug 17 01:11:49.526: INFO: Created: latency-svc-8bzjt Aug 17 01:11:49.569: INFO: Got endpoints: 
latency-svc-8bzjt [2.852372812s] Aug 17 01:11:49.578: INFO: Created: latency-svc-mhgw8 Aug 17 01:11:49.607: INFO: Got endpoints: latency-svc-mhgw8 [2.786628187s] Aug 17 01:11:49.666: INFO: Created: latency-svc-kwp9m Aug 17 01:11:49.720: INFO: Got endpoints: latency-svc-kwp9m [2.743029531s] Aug 17 01:11:49.725: INFO: Created: latency-svc-h27qx Aug 17 01:11:49.808: INFO: Got endpoints: latency-svc-h27qx [2.256186644s] Aug 17 01:11:50.012: INFO: Created: latency-svc-fw5j2 Aug 17 01:11:50.157: INFO: Got endpoints: latency-svc-fw5j2 [2.299013603s] Aug 17 01:11:50.218: INFO: Created: latency-svc-q5prv Aug 17 01:11:50.307: INFO: Got endpoints: latency-svc-q5prv [1.833016565s] Aug 17 01:11:50.370: INFO: Created: latency-svc-42wzw Aug 17 01:11:50.389: INFO: Got endpoints: latency-svc-42wzw [1.840779185s] Aug 17 01:11:50.468: INFO: Created: latency-svc-kflb6 Aug 17 01:11:50.512: INFO: Got endpoints: latency-svc-kflb6 [1.87005711s] Aug 17 01:11:50.515: INFO: Created: latency-svc-nvw8j Aug 17 01:11:50.544: INFO: Got endpoints: latency-svc-nvw8j [1.789018624s] Aug 17 01:11:50.618: INFO: Created: latency-svc-784hr Aug 17 01:11:50.630: INFO: Got endpoints: latency-svc-784hr [1.822326867s] Aug 17 01:11:50.656: INFO: Created: latency-svc-xwf9h Aug 17 01:11:50.673: INFO: Got endpoints: latency-svc-xwf9h [1.360135835s] Aug 17 01:11:50.698: INFO: Created: latency-svc-wssdd Aug 17 01:11:50.715: INFO: Got endpoints: latency-svc-wssdd [1.354995323s] Aug 17 01:11:50.786: INFO: Created: latency-svc-gcsf6 Aug 17 01:11:50.790: INFO: Got endpoints: latency-svc-gcsf6 [1.352236054s] Aug 17 01:11:50.851: INFO: Created: latency-svc-r78dr Aug 17 01:11:50.866: INFO: Got endpoints: latency-svc-r78dr [1.403487826s] Aug 17 01:11:50.941: INFO: Created: latency-svc-p6nwp Aug 17 01:11:50.961: INFO: Got endpoints: latency-svc-p6nwp [1.462962509s] Aug 17 01:11:50.992: INFO: Created: latency-svc-gm2nq Aug 17 01:11:51.010: INFO: Got endpoints: latency-svc-gm2nq [1.440585468s] Aug 17 01:11:51.097: INFO: Created: latency-svc-tc4kf Aug 17 01:11:51.102: INFO: Got endpoints: latency-svc-tc4kf [1.494382803s] Aug 17 01:11:51.144: INFO: Created: latency-svc-rk4r7 Aug 17 01:11:51.178: INFO: Got endpoints: latency-svc-rk4r7 [1.45800094s] Aug 17 01:11:51.246: INFO: Created: latency-svc-mpv2h Aug 17 01:11:51.256: INFO: Got endpoints: latency-svc-mpv2h [1.4483851s] Aug 17 01:11:51.279: INFO: Created: latency-svc-grrqk Aug 17 01:11:51.305: INFO: Got endpoints: latency-svc-grrqk [1.147144579s] Aug 17 01:11:51.333: INFO: Created: latency-svc-7r56z Aug 17 01:11:51.414: INFO: Got endpoints: latency-svc-7r56z [1.107463196s] Aug 17 01:11:51.416: INFO: Created: latency-svc-27rpg Aug 17 01:11:51.430: INFO: Got endpoints: latency-svc-27rpg [1.040877476s] Aug 17 01:11:51.457: INFO: Created: latency-svc-6fkxl Aug 17 01:11:51.467: INFO: Got endpoints: latency-svc-6fkxl [954.248183ms] Aug 17 01:11:51.489: INFO: Created: latency-svc-c55cp Aug 17 01:11:51.503: INFO: Got endpoints: latency-svc-c55cp [958.923617ms] Aug 17 01:11:51.564: INFO: Created: latency-svc-gvjns Aug 17 01:11:51.592: INFO: Got endpoints: latency-svc-gvjns [961.493972ms] Aug 17 01:11:51.593: INFO: Created: latency-svc-969tc Aug 17 01:11:51.609: INFO: Got endpoints: latency-svc-969tc [936.216616ms] Aug 17 01:11:51.640: INFO: Created: latency-svc-98xxg Aug 17 01:11:51.654: INFO: Got endpoints: latency-svc-98xxg [939.356508ms] Aug 17 01:11:51.775: INFO: Created: latency-svc-lkmtz Aug 17 01:11:51.792: INFO: Got endpoints: latency-svc-lkmtz [1.002536319s] Aug 17 01:11:51.814: INFO: Created: 
latency-svc-2x77w Aug 17 01:11:51.851: INFO: Got endpoints: latency-svc-2x77w [985.53469ms] Aug 17 01:11:51.873: INFO: Created: latency-svc-fwkjr Aug 17 01:11:51.912: INFO: Got endpoints: latency-svc-fwkjr [950.866639ms] Aug 17 01:11:51.994: INFO: Created: latency-svc-wkbhd Aug 17 01:11:51.999: INFO: Got endpoints: latency-svc-wkbhd [988.931135ms] Aug 17 01:11:52.033: INFO: Created: latency-svc-sdfg4 Aug 17 01:11:52.046: INFO: Got endpoints: latency-svc-sdfg4 [944.360223ms] Aug 17 01:11:52.077: INFO: Created: latency-svc-7nznh Aug 17 01:11:52.150: INFO: Got endpoints: latency-svc-7nznh [972.725373ms] Aug 17 01:11:52.152: INFO: Created: latency-svc-tg76p Aug 17 01:11:52.172: INFO: Got endpoints: latency-svc-tg76p [915.114375ms] Aug 17 01:11:52.201: INFO: Created: latency-svc-7b8gv Aug 17 01:11:52.214: INFO: Got endpoints: latency-svc-7b8gv [909.316849ms] Aug 17 01:11:52.239: INFO: Created: latency-svc-g5qwb Aug 17 01:11:52.336: INFO: Got endpoints: latency-svc-g5qwb [921.621791ms] Aug 17 01:11:52.344: INFO: Created: latency-svc-xc5dn Aug 17 01:11:52.365: INFO: Got endpoints: latency-svc-xc5dn [934.338281ms] Aug 17 01:11:52.386: INFO: Created: latency-svc-9w9bh Aug 17 01:11:52.395: INFO: Got endpoints: latency-svc-9w9bh [928.123741ms] Aug 17 01:11:52.417: INFO: Created: latency-svc-cm9w7 Aug 17 01:11:52.425: INFO: Got endpoints: latency-svc-cm9w7 [921.36552ms] Aug 17 01:11:52.479: INFO: Created: latency-svc-689wv Aug 17 01:11:52.491: INFO: Got endpoints: latency-svc-689wv [898.665795ms] Aug 17 01:11:52.515: INFO: Created: latency-svc-v7xcr Aug 17 01:11:52.539: INFO: Got endpoints: latency-svc-v7xcr [929.803309ms] Aug 17 01:11:52.566: INFO: Created: latency-svc-v4sbk Aug 17 01:11:52.617: INFO: Got endpoints: latency-svc-v4sbk [962.871704ms] Aug 17 01:11:52.657: INFO: Created: latency-svc-ms7c8 Aug 17 01:11:52.666: INFO: Got endpoints: latency-svc-ms7c8 [873.113727ms] Aug 17 01:11:52.692: INFO: Created: latency-svc-gg26b Aug 17 01:11:52.707: INFO: Got endpoints: latency-svc-gg26b [855.485926ms] Aug 17 01:11:52.755: INFO: Created: latency-svc-j8rbk Aug 17 01:11:52.785: INFO: Got endpoints: latency-svc-j8rbk [872.767813ms] Aug 17 01:11:52.786: INFO: Created: latency-svc-bt4wj Aug 17 01:11:52.816: INFO: Got endpoints: latency-svc-bt4wj [816.947923ms] Aug 17 01:11:52.848: INFO: Created: latency-svc-lbq99 Aug 17 01:11:52.917: INFO: Got endpoints: latency-svc-lbq99 [871.086944ms] Aug 17 01:11:52.922: INFO: Created: latency-svc-wpmgn Aug 17 01:11:52.938: INFO: Got endpoints: latency-svc-wpmgn [787.166001ms] Aug 17 01:11:52.959: INFO: Created: latency-svc-dlcmq Aug 17 01:11:52.998: INFO: Got endpoints: latency-svc-dlcmq [826.533117ms] Aug 17 01:11:53.067: INFO: Created: latency-svc-dxb46 Aug 17 01:11:53.076: INFO: Got endpoints: latency-svc-dxb46 [862.131308ms] Aug 17 01:11:53.101: INFO: Created: latency-svc-64kj6 Aug 17 01:11:53.119: INFO: Got endpoints: latency-svc-64kj6 [783.482653ms] Aug 17 01:11:53.152: INFO: Created: latency-svc-rsfc7 Aug 17 01:11:53.246: INFO: Got endpoints: latency-svc-rsfc7 [881.534626ms] Aug 17 01:11:53.260: INFO: Created: latency-svc-xwgx2 Aug 17 01:11:53.275: INFO: Got endpoints: latency-svc-xwgx2 [880.460015ms] Aug 17 01:11:53.305: INFO: Created: latency-svc-vk52b Aug 17 01:11:53.318: INFO: Got endpoints: latency-svc-vk52b [893.063825ms] Aug 17 01:11:53.445: INFO: Created: latency-svc-q4mff Aug 17 01:11:53.461: INFO: Got endpoints: latency-svc-q4mff [970.151776ms] Aug 17 01:11:53.507: INFO: Created: latency-svc-wphtd Aug 17 01:11:53.522: INFO: Got endpoints: 
latency-svc-wphtd [983.435667ms] Aug 17 01:11:53.624: INFO: Created: latency-svc-ktkxd Aug 17 01:11:53.627: INFO: Got endpoints: latency-svc-ktkxd [1.009990854s] Aug 17 01:11:53.712: INFO: Created: latency-svc-dtgk2 Aug 17 01:11:53.773: INFO: Got endpoints: latency-svc-dtgk2 [1.107712928s] Aug 17 01:11:53.777: INFO: Created: latency-svc-q7jwp Aug 17 01:11:53.786: INFO: Got endpoints: latency-svc-q7jwp [1.07934469s] Aug 17 01:11:53.810: INFO: Created: latency-svc-5g85z Aug 17 01:11:53.819: INFO: Got endpoints: latency-svc-5g85z [1.034107895s] Aug 17 01:11:53.850: INFO: Created: latency-svc-9gp7k Aug 17 01:11:53.965: INFO: Got endpoints: latency-svc-9gp7k [1.148553072s] Aug 17 01:11:54.000: INFO: Created: latency-svc-zvnm9 Aug 17 01:11:54.023: INFO: Got endpoints: latency-svc-zvnm9 [1.106209694s] Aug 17 01:11:54.045: INFO: Created: latency-svc-xvg6b Aug 17 01:11:54.059: INFO: Got endpoints: latency-svc-xvg6b [1.121800844s] Aug 17 01:11:54.120: INFO: Created: latency-svc-47lt7 Aug 17 01:11:54.154: INFO: Got endpoints: latency-svc-47lt7 [1.15560191s] Aug 17 01:11:54.182: INFO: Created: latency-svc-5rgbz Aug 17 01:11:54.198: INFO: Got endpoints: latency-svc-5rgbz [1.121836873s] Aug 17 01:11:54.307: INFO: Created: latency-svc-wf82b Aug 17 01:11:54.318: INFO: Got endpoints: latency-svc-wf82b [1.198453433s] Aug 17 01:11:54.373: INFO: Created: latency-svc-xn6wv Aug 17 01:11:54.451: INFO: Got endpoints: latency-svc-xn6wv [1.204465276s] Aug 17 01:11:54.453: INFO: Created: latency-svc-wbjhx Aug 17 01:11:54.462: INFO: Got endpoints: latency-svc-wbjhx [1.187024663s] Aug 17 01:11:54.544: INFO: Created: latency-svc-qp48g Aug 17 01:11:54.630: INFO: Got endpoints: latency-svc-qp48g [1.311674869s] Aug 17 01:11:54.638: INFO: Created: latency-svc-8ddpc Aug 17 01:11:54.649: INFO: Got endpoints: latency-svc-8ddpc [1.187200907s] Aug 17 01:11:54.698: INFO: Created: latency-svc-bnnf8 Aug 17 01:11:54.715: INFO: Got endpoints: latency-svc-bnnf8 [1.192852446s] Aug 17 01:11:54.817: INFO: Created: latency-svc-f7m9c Aug 17 01:11:54.848: INFO: Got endpoints: latency-svc-f7m9c [1.221234982s] Aug 17 01:11:54.901: INFO: Created: latency-svc-9fmbg Aug 17 01:11:54.989: INFO: Got endpoints: latency-svc-9fmbg [1.215245154s] Aug 17 01:11:55.050: INFO: Created: latency-svc-vf5gc Aug 17 01:11:55.144: INFO: Got endpoints: latency-svc-vf5gc [1.358202837s] Aug 17 01:11:55.219: INFO: Created: latency-svc-4pnhb Aug 17 01:11:55.306: INFO: Got endpoints: latency-svc-4pnhb [1.487386874s] Aug 17 01:11:55.366: INFO: Created: latency-svc-7kvch Aug 17 01:11:55.382: INFO: Got endpoints: latency-svc-7kvch [1.417207634s] Aug 17 01:11:55.516: INFO: Created: latency-svc-bnkc4 Aug 17 01:11:55.527: INFO: Got endpoints: latency-svc-bnkc4 [1.503181674s] Aug 17 01:11:55.605: INFO: Created: latency-svc-drrq8 Aug 17 01:11:55.750: INFO: Got endpoints: latency-svc-drrq8 [1.690148981s] Aug 17 01:11:55.753: INFO: Created: latency-svc-74hvq Aug 17 01:11:55.820: INFO: Got endpoints: latency-svc-74hvq [1.666151601s] Aug 17 01:11:55.917: INFO: Created: latency-svc-49vz7 Aug 17 01:11:55.952: INFO: Got endpoints: latency-svc-49vz7 [1.754070718s] Aug 17 01:11:56.097: INFO: Created: latency-svc-kkzv4 Aug 17 01:11:56.143: INFO: Got endpoints: latency-svc-kkzv4 [1.824478703s] Aug 17 01:11:56.313: INFO: Created: latency-svc-25xk5 Aug 17 01:11:56.317: INFO: Got endpoints: latency-svc-25xk5 [1.8661592s] Aug 17 01:11:56.523: INFO: Created: latency-svc-bfkc5 Aug 17 01:11:56.527: INFO: Got endpoints: latency-svc-bfkc5 [2.064915469s] Aug 17 01:11:56.578: INFO: Created: 
latency-svc-zthw2 Aug 17 01:11:56.598: INFO: Got endpoints: latency-svc-zthw2 [1.968448002s] Aug 17 01:11:56.769: INFO: Created: latency-svc-wz6jp Aug 17 01:11:56.819: INFO: Got endpoints: latency-svc-wz6jp [2.170616843s] Aug 17 01:11:56.899: INFO: Created: latency-svc-h5k5s Aug 17 01:11:56.902: INFO: Got endpoints: latency-svc-h5k5s [2.186732231s] Aug 17 01:11:56.947: INFO: Created: latency-svc-78l9h Aug 17 01:11:56.965: INFO: Got endpoints: latency-svc-78l9h [2.116096463s] Aug 17 01:11:56.989: INFO: Created: latency-svc-97h7v Aug 17 01:11:57.054: INFO: Got endpoints: latency-svc-97h7v [2.06533508s] Aug 17 01:11:57.130: INFO: Created: latency-svc-hgpqj Aug 17 01:11:57.144: INFO: Got endpoints: latency-svc-hgpqj [1.999919359s] Aug 17 01:11:57.218: INFO: Created: latency-svc-5sgfk Aug 17 01:11:57.248: INFO: Got endpoints: latency-svc-5sgfk [1.941339007s] Aug 17 01:11:57.305: INFO: Created: latency-svc-82zps Aug 17 01:11:57.366: INFO: Got endpoints: latency-svc-82zps [1.984528077s] Aug 17 01:11:57.389: INFO: Created: latency-svc-28vhv Aug 17 01:11:57.409: INFO: Got endpoints: latency-svc-28vhv [1.882677874s] Aug 17 01:11:57.440: INFO: Created: latency-svc-664p4 Aug 17 01:11:57.461: INFO: Got endpoints: latency-svc-664p4 [1.711224094s] Aug 17 01:11:57.518: INFO: Created: latency-svc-42tkc Aug 17 01:11:57.538: INFO: Got endpoints: latency-svc-42tkc [1.718017888s] Aug 17 01:11:57.587: INFO: Created: latency-svc-swj22 Aug 17 01:11:57.738: INFO: Got endpoints: latency-svc-swj22 [1.785614654s] Aug 17 01:11:57.773: INFO: Created: latency-svc-fnmlm Aug 17 01:11:57.803: INFO: Got endpoints: latency-svc-fnmlm [1.660626459s] Aug 17 01:11:57.929: INFO: Created: latency-svc-7vttd Aug 17 01:11:57.974: INFO: Got endpoints: latency-svc-7vttd [1.656708114s] Aug 17 01:11:58.022: INFO: Created: latency-svc-d692x Aug 17 01:11:58.127: INFO: Got endpoints: latency-svc-d692x [1.599382266s] Aug 17 01:11:58.128: INFO: Created: latency-svc-xmndq Aug 17 01:11:58.142: INFO: Got endpoints: latency-svc-xmndq [1.544038349s] Aug 17 01:11:58.199: INFO: Created: latency-svc-zqdbr Aug 17 01:11:58.294: INFO: Got endpoints: latency-svc-zqdbr [1.474561124s] Aug 17 01:11:58.311: INFO: Created: latency-svc-965wl Aug 17 01:11:58.344: INFO: Got endpoints: latency-svc-965wl [1.441721049s] Aug 17 01:11:58.382: INFO: Created: latency-svc-gdrl7 Aug 17 01:11:58.474: INFO: Got endpoints: latency-svc-gdrl7 [1.509402282s] Aug 17 01:11:58.477: INFO: Created: latency-svc-7kglg Aug 17 01:11:58.487: INFO: Got endpoints: latency-svc-7kglg [1.432908494s] Aug 17 01:11:58.505: INFO: Created: latency-svc-dgx6m Aug 17 01:11:58.536: INFO: Got endpoints: latency-svc-dgx6m [1.391258831s] Aug 17 01:11:58.568: INFO: Created: latency-svc-w6s7t Aug 17 01:11:58.636: INFO: Got endpoints: latency-svc-w6s7t [1.388561854s] Aug 17 01:11:58.692: INFO: Created: latency-svc-s7rd9 Aug 17 01:11:58.705: INFO: Got endpoints: latency-svc-s7rd9 [1.338198198s] Aug 17 01:11:58.727: INFO: Created: latency-svc-xbcx6 Aug 17 01:11:58.797: INFO: Got endpoints: latency-svc-xbcx6 [1.388030594s] Aug 17 01:11:58.800: INFO: Created: latency-svc-qp4wg Aug 17 01:11:58.806: INFO: Got endpoints: latency-svc-qp4wg [1.345198587s] Aug 17 01:11:58.832: INFO: Created: latency-svc-6prqs Aug 17 01:11:58.849: INFO: Got endpoints: latency-svc-6prqs [1.310911928s] Aug 17 01:11:58.874: INFO: Created: latency-svc-dkzll Aug 17 01:11:58.886: INFO: Got endpoints: latency-svc-dkzll [1.147698843s] Aug 17 01:11:58.941: INFO: Created: latency-svc-265g6 Aug 17 01:11:58.945: INFO: Got endpoints: 
latency-svc-265g6 [1.1419811s] Aug 17 01:11:58.970: INFO: Created: latency-svc-mrg5d Aug 17 01:11:59.001: INFO: Got endpoints: latency-svc-mrg5d [1.026675557s] Aug 17 01:11:59.031: INFO: Created: latency-svc-kj29m Aug 17 01:11:59.138: INFO: Got endpoints: latency-svc-kj29m [1.011530005s] Aug 17 01:11:59.142: INFO: Created: latency-svc-d7pfx Aug 17 01:11:59.193: INFO: Got endpoints: latency-svc-d7pfx [1.050217035s] Aug 17 01:11:59.348: INFO: Created: latency-svc-4gwrr Aug 17 01:11:59.353: INFO: Got endpoints: latency-svc-4gwrr [1.058610171s] Aug 17 01:11:59.383: INFO: Created: latency-svc-jgjk5 Aug 17 01:11:59.396: INFO: Got endpoints: latency-svc-jgjk5 [1.052195097s] Aug 17 01:11:59.431: INFO: Created: latency-svc-bp8tr Aug 17 01:11:59.492: INFO: Got endpoints: latency-svc-bp8tr [1.017926683s] Aug 17 01:11:59.517: INFO: Created: latency-svc-ncvlw Aug 17 01:11:59.529: INFO: Got endpoints: latency-svc-ncvlw [1.041908331s] Aug 17 01:11:59.562: INFO: Created: latency-svc-c49zz Aug 17 01:11:59.577: INFO: Got endpoints: latency-svc-c49zz [1.041023665s] Aug 17 01:11:59.642: INFO: Created: latency-svc-8rjgd Aug 17 01:11:59.647: INFO: Got endpoints: latency-svc-8rjgd [1.009984269s] Aug 17 01:11:59.684: INFO: Created: latency-svc-7j7hm Aug 17 01:11:59.709: INFO: Got endpoints: latency-svc-7j7hm [1.003944737s] Aug 17 01:11:59.822: INFO: Created: latency-svc-d9rhs Aug 17 01:11:59.825: INFO: Got endpoints: latency-svc-d9rhs [1.027944578s] Aug 17 01:11:59.973: INFO: Created: latency-svc-92plg Aug 17 01:12:00.029: INFO: Got endpoints: latency-svc-92plg [1.223144945s] Aug 17 01:12:00.481: INFO: Created: latency-svc-nfvtj Aug 17 01:12:00.750: INFO: Got endpoints: latency-svc-nfvtj [1.900836649s] Aug 17 01:12:00.753: INFO: Created: latency-svc-9tmn9 Aug 17 01:12:00.763: INFO: Got endpoints: latency-svc-9tmn9 [1.877474792s] Aug 17 01:12:00.970: INFO: Created: latency-svc-pm6hl Aug 17 01:12:01.046: INFO: Got endpoints: latency-svc-pm6hl [2.100462185s] Aug 17 01:12:01.104: INFO: Created: latency-svc-m86rw Aug 17 01:12:01.111: INFO: Got endpoints: latency-svc-m86rw [2.110658636s] Aug 17 01:12:01.270: INFO: Created: latency-svc-5gtmg Aug 17 01:12:01.274: INFO: Got endpoints: latency-svc-5gtmg [2.135707169s] Aug 17 01:12:01.364: INFO: Created: latency-svc-4nkcb Aug 17 01:12:01.438: INFO: Got endpoints: latency-svc-4nkcb [2.245128715s] Aug 17 01:12:01.440: INFO: Created: latency-svc-bhqfr Aug 17 01:12:01.447: INFO: Got endpoints: latency-svc-bhqfr [2.09466159s] Aug 17 01:12:01.467: INFO: Created: latency-svc-zvlds Aug 17 01:12:01.473: INFO: Got endpoints: latency-svc-zvlds [2.076977769s] Aug 17 01:12:01.522: INFO: Created: latency-svc-scpfw Aug 17 01:12:01.624: INFO: Got endpoints: latency-svc-scpfw [2.132202843s] Aug 17 01:12:01.659: INFO: Created: latency-svc-56n6l Aug 17 01:12:01.677: INFO: Got endpoints: latency-svc-56n6l [2.148045992s] Aug 17 01:12:01.710: INFO: Created: latency-svc-qhz6h Aug 17 01:12:01.719: INFO: Got endpoints: latency-svc-qhz6h [2.142216626s] Aug 17 01:12:01.785: INFO: Created: latency-svc-5gwx4 Aug 17 01:12:01.810: INFO: Got endpoints: latency-svc-5gwx4 [2.163067112s] Aug 17 01:12:01.883: INFO: Created: latency-svc-7scj4 Aug 17 01:12:02.013: INFO: Got endpoints: latency-svc-7scj4 [2.304680563s] Aug 17 01:12:02.265: INFO: Created: latency-svc-v44qn Aug 17 01:12:02.289: INFO: Got endpoints: latency-svc-v44qn [2.463611311s] Aug 17 01:12:02.420: INFO: Created: latency-svc-hxt6v Aug 17 01:12:02.436: INFO: Got endpoints: latency-svc-hxt6v [2.406933669s] Aug 17 01:12:02.482: INFO: Created: 
latency-svc-sxgmc Aug 17 01:12:02.499: INFO: Got endpoints: latency-svc-sxgmc [1.749328776s] Aug 17 01:12:02.570: INFO: Created: latency-svc-cl7nk Aug 17 01:12:02.629: INFO: Got endpoints: latency-svc-cl7nk [1.866062445s] Aug 17 01:12:02.630: INFO: Created: latency-svc-9dpvx Aug 17 01:12:02.646: INFO: Got endpoints: latency-svc-9dpvx [1.600253864s] Aug 17 01:12:02.732: INFO: Created: latency-svc-dkkkl Aug 17 01:12:02.764: INFO: Got endpoints: latency-svc-dkkkl [1.652250094s] Aug 17 01:12:02.764: INFO: Created: latency-svc-lmj46 Aug 17 01:12:02.796: INFO: Got endpoints: latency-svc-lmj46 [1.522283799s] Aug 17 01:12:02.883: INFO: Created: latency-svc-c658b Aug 17 01:12:02.887: INFO: Got endpoints: latency-svc-c658b [1.449000429s] Aug 17 01:12:02.924: INFO: Created: latency-svc-8qfms Aug 17 01:12:02.940: INFO: Got endpoints: latency-svc-8qfms [1.492551884s] Aug 17 01:12:02.979: INFO: Created: latency-svc-wljbx Aug 17 01:12:03.067: INFO: Got endpoints: latency-svc-wljbx [1.593724995s] Aug 17 01:12:03.070: INFO: Created: latency-svc-mpzvl Aug 17 01:12:03.078: INFO: Got endpoints: latency-svc-mpzvl [1.45352867s] Aug 17 01:12:03.132: INFO: Created: latency-svc-mv66p Aug 17 01:12:03.151: INFO: Got endpoints: latency-svc-mv66p [1.473227544s] Aug 17 01:12:03.217: INFO: Created: latency-svc-7dfmw Aug 17 01:12:03.262: INFO: Got endpoints: latency-svc-7dfmw [1.5424229s] Aug 17 01:12:03.367: INFO: Created: latency-svc-r5lpt Aug 17 01:12:03.379: INFO: Got endpoints: latency-svc-r5lpt [1.569314474s] Aug 17 01:12:03.568: INFO: Created: latency-svc-5dtbc Aug 17 01:12:03.572: INFO: Got endpoints: latency-svc-5dtbc [1.558150847s] Aug 17 01:12:03.742: INFO: Created: latency-svc-8r5c4 Aug 17 01:12:03.787: INFO: Got endpoints: latency-svc-8r5c4 [1.497554564s] Aug 17 01:12:04.077: INFO: Created: latency-svc-kvssn Aug 17 01:12:04.099: INFO: Got endpoints: latency-svc-kvssn [1.662223432s] Aug 17 01:12:04.198: INFO: Created: latency-svc-b2nqq Aug 17 01:12:04.234: INFO: Got endpoints: latency-svc-b2nqq [1.734694047s] Aug 17 01:12:04.282: INFO: Created: latency-svc-rmzlr Aug 17 01:12:04.296: INFO: Got endpoints: latency-svc-rmzlr [1.666661517s] Aug 17 01:12:04.348: INFO: Created: latency-svc-llrhl Aug 17 01:12:04.353: INFO: Got endpoints: latency-svc-llrhl [1.707004399s] Aug 17 01:12:04.375: INFO: Created: latency-svc-28s2k Aug 17 01:12:04.387: INFO: Got endpoints: latency-svc-28s2k [1.623414644s] Aug 17 01:12:04.406: INFO: Created: latency-svc-b2mjm Aug 17 01:12:04.432: INFO: Got endpoints: latency-svc-b2mjm [1.635814229s] Aug 17 01:12:04.492: INFO: Created: latency-svc-vhf2v Aug 17 01:12:04.507: INFO: Got endpoints: latency-svc-vhf2v [1.620227173s] Aug 17 01:12:04.565: INFO: Created: latency-svc-bxxsp Aug 17 01:12:04.852: INFO: Got endpoints: latency-svc-bxxsp [1.911899813s] Aug 17 01:12:04.853: INFO: Created: latency-svc-prm7x Aug 17 01:12:04.867: INFO: Got endpoints: latency-svc-prm7x [1.800371928s] Aug 17 01:12:05.007: INFO: Created: latency-svc-2htmb Aug 17 01:12:05.035: INFO: Got endpoints: latency-svc-2htmb [1.957437436s] Aug 17 01:12:05.059: INFO: Created: latency-svc-vtjdz Aug 17 01:12:05.072: INFO: Got endpoints: latency-svc-vtjdz [1.921559255s] Aug 17 01:12:05.105: INFO: Created: latency-svc-ccmbv Aug 17 01:12:05.168: INFO: Got endpoints: latency-svc-ccmbv [1.906064617s] Aug 17 01:12:05.182: INFO: Created: latency-svc-6549x Aug 17 01:12:05.204: INFO: Got endpoints: latency-svc-6549x [1.825130867s] Aug 17 01:12:05.312: INFO: Created: latency-svc-fwfhj Aug 17 01:12:05.340: INFO: Created: latency-svc-9qhrn 
Aug 17 01:12:05.340: INFO: Got endpoints: latency-svc-fwfhj [1.768098525s] Aug 17 01:12:05.582: INFO: Got endpoints: latency-svc-9qhrn [1.795577509s] Aug 17 01:12:05.633: INFO: Created: latency-svc-kqsrw Aug 17 01:12:05.788: INFO: Got endpoints: latency-svc-kqsrw [1.689608174s] Aug 17 01:12:05.849: INFO: Created: latency-svc-wbccn Aug 17 01:12:05.882: INFO: Got endpoints: latency-svc-wbccn [1.647685991s] Aug 17 01:12:05.946: INFO: Created: latency-svc-592zt Aug 17 01:12:05.961: INFO: Got endpoints: latency-svc-592zt [1.665067766s] Aug 17 01:12:05.982: INFO: Created: latency-svc-sghgt Aug 17 01:12:05.997: INFO: Got endpoints: latency-svc-sghgt [1.644140107s] Aug 17 01:12:06.027: INFO: Created: latency-svc-tmxzx Aug 17 01:12:06.102: INFO: Got endpoints: latency-svc-tmxzx [1.714965208s] Aug 17 01:12:06.134: INFO: Created: latency-svc-bz9tt Aug 17 01:12:06.159: INFO: Got endpoints: latency-svc-bz9tt [1.7270342s] Aug 17 01:12:06.194: INFO: Created: latency-svc-xd5j8 Aug 17 01:12:06.288: INFO: Got endpoints: latency-svc-xd5j8 [1.781235933s] Aug 17 01:12:06.291: INFO: Created: latency-svc-kh2l8 Aug 17 01:12:06.299: INFO: Got endpoints: latency-svc-kh2l8 [1.447049414s] Aug 17 01:12:06.360: INFO: Created: latency-svc-gp9js Aug 17 01:12:06.378: INFO: Got endpoints: latency-svc-gp9js [1.510574931s] Aug 17 01:12:06.468: INFO: Created: latency-svc-tz4tr Aug 17 01:12:06.470: INFO: Got endpoints: latency-svc-tz4tr [1.434722876s] Aug 17 01:12:06.470: INFO: Latencies: [154.427848ms 199.556727ms 239.445249ms 295.558881ms 361.593696ms 440.860118ms 500.592963ms 633.419899ms 684.931062ms 783.482653ms 787.166001ms 788.395532ms 816.947923ms 826.533117ms 855.485926ms 862.131308ms 871.086944ms 872.767813ms 873.113727ms 880.460015ms 881.534626ms 893.063825ms 898.665795ms 909.316849ms 915.114375ms 921.36552ms 921.621791ms 928.123741ms 929.803309ms 934.338281ms 936.216616ms 939.356508ms 944.360223ms 944.486802ms 950.866639ms 954.248183ms 958.923617ms 961.493972ms 962.871704ms 970.151776ms 972.725373ms 983.435667ms 985.53469ms 988.931135ms 1.002536319s 1.003944737s 1.009984269s 1.009990854s 1.011530005s 1.017926683s 1.026675557s 1.027944578s 1.034107895s 1.040877476s 1.041023665s 1.041908331s 1.050217035s 1.052195097s 1.058610171s 1.07934469s 1.106209694s 1.107463196s 1.107712928s 1.121800844s 1.121836873s 1.1419811s 1.147144579s 1.147698843s 1.148553072s 1.15560191s 1.187024663s 1.187200907s 1.192852446s 1.198453433s 1.204465276s 1.215245154s 1.221234982s 1.223144945s 1.310911928s 1.311674869s 1.338198198s 1.345198587s 1.352236054s 1.354995323s 1.358202837s 1.360135835s 1.388030594s 1.388561854s 1.391258831s 1.403487826s 1.417207634s 1.432908494s 1.434722876s 1.440585468s 1.441721049s 1.447049414s 1.4483851s 1.449000429s 1.45352867s 1.45800094s 1.462962509s 1.473227544s 1.474561124s 1.487386874s 1.492551884s 1.494382803s 1.497554564s 1.503181674s 1.509402282s 1.510574931s 1.519584522s 1.522283799s 1.5424229s 1.544038349s 1.558150847s 1.569314474s 1.593724995s 1.599382266s 1.600253864s 1.620227173s 1.623414644s 1.635814229s 1.644140107s 1.647685991s 1.652250094s 1.656708114s 1.660626459s 1.662223432s 1.665067766s 1.666151601s 1.666661517s 1.689608174s 1.690148981s 1.707004399s 1.711224094s 1.714965208s 1.718017888s 1.7270342s 1.734694047s 1.749328776s 1.754070718s 1.768098525s 1.781235933s 1.785614654s 1.789018624s 1.795577509s 1.800371928s 1.822326867s 1.824478703s 1.825130867s 1.826143332s 1.833016565s 1.840779185s 1.866062445s 1.8661592s 1.87005711s 1.877474792s 1.882677874s 1.900836649s 1.906064617s 1.911899813s 
1.921559255s 1.941339007s 1.957437436s 1.968448002s 1.984528077s 1.999919359s 2.064915469s 2.06533508s 2.076977769s 2.09466159s 2.100462185s 2.110658636s 2.116096463s 2.132202843s 2.135707169s 2.142216626s 2.148045992s 2.163067112s 2.170616843s 2.186732231s 2.245128715s 2.256186644s 2.299013603s 2.304680563s 2.406933669s 2.441575474s 2.463611311s 2.516931392s 2.56924629s 2.57655002s 2.610201328s 2.647245301s 2.743029531s 2.786628187s 2.832762162s 2.852372812s 2.929433782s 2.985148702s 3.044160587s] Aug 17 01:12:06.471: INFO: 50 %ile: 1.462962509s Aug 17 01:12:06.471: INFO: 90 %ile: 2.186732231s Aug 17 01:12:06.471: INFO: 99 %ile: 2.985148702s Aug 17 01:12:06.471: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:12:06.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2252" for this suite. • [SLOW TEST:25.727 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":294,"completed":283,"skipped":4736,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:12:06.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 01:12:06.627: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:12:23.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2220" for this suite. 
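
The init-container test above only logs "PodSpec: initContainers in spec.initContainers"; a minimal sketch of the kind of pod it creates follows — names, images, and commands are assumptions for illustration, not the test's actual spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo                 # hypothetical name
    spec:
      restartPolicy: Never            # RestartNever: the pod runs to completion once
      initContainers:                 # run sequentially, each must exit 0
      - name: init-1
        image: busybox:1.29
        command: ["true"]
      - name: init-2
        image: busybox:1.29
        command: ["true"]
      containers:
      - name: main                    # starts only after all init containers succeed
        image: busybox:1.29
        command: ["true"]
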
• [SLOW TEST:19.223 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":294,"completed":284,"skipped":4754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:12:25.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 17 01:12:43.413: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 01:12:43.432: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 01:12:45.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 01:12:45.435: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 01:12:47.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 01:12:47.481: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 01:12:49.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 01:12:49.446: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 01:12:51.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 01:12:51.504: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:12:51.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7996" for this suite. 
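
For reference, a preStop HTTP hook like the one exercised above is declared per container; on pod deletion the kubelet issues the GET before the container receives SIGTERM, which is why the log waits for pod-with-prestop-http-hook to disappear and then checks that the handler saw the request. The handler address, port, and path below are illustrative stand-ins, not values from the test:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-http-hook
    spec:
      containers:
      - name: app
        image: busybox:1.29           # hypothetical image
        command: ["sleep", "600"]
        lifecycle:
          preStop:
            httpGet:                  # fired on deletion, before SIGTERM
              path: /echo?msg=prestop
              port: 8080
              host: 10.244.1.5        # e.g. the hook-handler pod's IP
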
• [SLOW TEST:25.939 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":294,"completed":285,"skipped":4791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:12:51.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:12:51.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53" in namespace "downward-api-8565" to be "Succeeded or Failed" Aug 17 01:12:51.895: INFO: Pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53": Phase="Pending", Reason="", readiness=false. Elapsed: 45.835587ms Aug 17 01:12:54.561: INFO: Pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711683361s Aug 17 01:12:56.601: INFO: Pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.751099019s Aug 17 01:12:58.661: INFO: Pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.811056368s STEP: Saw pod success Aug 17 01:12:58.661: INFO: Pod "downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53" satisfied condition "Succeeded or Failed" Aug 17 01:12:58.674: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53 container client-container: STEP: delete the pod Aug 17 01:12:58.819: INFO: Waiting for pod downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53 to disappear Aug 17 01:12:59.145: INFO: Pod downwardapi-volume-960abfbf-03d5-4f48-9799-8f10a759cf53 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:12:59.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8565" for this suite. • [SLOW TEST:7.676 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":286,"skipped":4823,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:12:59.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-4088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4088 to expose endpoints map[] Aug 17 01:12:59.537: INFO: successfully validated that service multi-endpoint-test in namespace services-4088 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-4088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4088 to expose endpoints map[pod1:[100]] Aug 17 01:13:04.019: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Aug 17 01:13:05.045: INFO: successfully validated that service multi-endpoint-test in namespace services-4088 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-4088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4088 to expose endpoints map[pod1:[100] 
pod2:[101]] Aug 17 01:13:09.470: INFO: Unexpected endpoints: found map[75734149-32f2-43e3-963f-16fcec73f4d5:[100]], expected map[pod1:[100] pod2:[101]], will retry Aug 17 01:13:11.469: INFO: successfully validated that service multi-endpoint-test in namespace services-4088 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-4088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4088 to expose endpoints map[pod2:[101]] Aug 17 01:13:12.138: INFO: successfully validated that service multi-endpoint-test in namespace services-4088 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-4088 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4088 to expose endpoints map[] Aug 17 01:13:12.445: INFO: successfully validated that service multi-endpoint-test in namespace services-4088 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:13:13.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4088" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:14.458 seconds] [sig-network] Services /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":294,"completed":287,"skipped":4825,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:13:13.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-d4v2 STEP: Creating a pod to test atomic-volume-subpath Aug 17 01:13:14.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-d4v2" in namespace "subpath-8220" to be "Succeeded or Failed" Aug 17 01:13:14.548: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.07183ms Aug 17 01:13:16.618: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103707101s Aug 17 01:13:18.673: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158953399s Aug 17 01:13:20.684: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 6.169426931s Aug 17 01:13:22.901: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 8.38694118s Aug 17 01:13:24.906: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 10.391377124s Aug 17 01:13:26.913: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 12.398038024s Aug 17 01:13:28.917: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 14.40225107s Aug 17 01:13:30.921: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 16.406566602s Aug 17 01:13:32.925: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 18.410481224s Aug 17 01:13:34.929: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 20.414674948s Aug 17 01:13:36.934: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 22.419605432s Aug 17 01:13:38.938: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Running", Reason="", readiness=true. Elapsed: 24.423861407s Aug 17 01:13:40.943: INFO: Pod "pod-subpath-test-downwardapi-d4v2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.428281384s STEP: Saw pod success Aug 17 01:13:40.943: INFO: Pod "pod-subpath-test-downwardapi-d4v2" satisfied condition "Succeeded or Failed" Aug 17 01:13:40.966: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-d4v2 container test-container-subpath-downwardapi-d4v2: STEP: delete the pod Aug 17 01:13:40.998: INFO: Waiting for pod pod-subpath-test-downwardapi-d4v2 to disappear Aug 17 01:13:41.009: INFO: Pod pod-subpath-test-downwardapi-d4v2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-d4v2 Aug 17 01:13:41.009: INFO: Deleting pod "pod-subpath-test-downwardapi-d4v2" in namespace "subpath-8220" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:13:41.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8220" for this suite. 
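
A minimal sketch of an atomic-writer subPath mount like the one tested above — the downwardAPI volume's files are written atomically via symlinked directories, and the subPath mounts one subdirectory of the volume. File and path names here are assumptions for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-test-downwardapi
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /probe-volume/podname"]
        volumeMounts:
        - name: downward
          mountPath: /probe-volume
          subPath: downward           # mount only this subdirectory of the volume
      volumes:
      - name: downward
        downwardAPI:
          items:
          - path: downward/podname    # written under the mounted subPath
            fieldRef:
              fieldPath: metadata.name
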
• [SLOW TEST:27.252 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":294,"completed":288,"skipped":4829,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:13:41.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:14:41.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-942" for this suite. 
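
The probe test above relies on a standard Kubernetes behavior worth spelling out: a failing readinessProbe only keeps the pod out of Service endpoints (Ready stays False); unlike a failing livenessProbe it never causes the kubelet to restart the container, so restartCount stays 0 for the full observation window. A minimal always-failing probe — image and command are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready
    spec:
      containers:
      - name: app
        image: busybox:1.29
        command: ["sleep", "600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]   # always fails: pod is never Ready
          initialDelaySeconds: 1
          periodSeconds: 5
        # no livenessProbe is set, so the container is never restarted
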
• [SLOW TEST:60.252 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":294,"completed":289,"skipped":4838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:14:41.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 01:14:41.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2" in namespace "projected-5088" to be "Succeeded or Failed" Aug 17 01:14:41.453: INFO: Pod "downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.425741ms Aug 17 01:14:43.663: INFO: Pod "downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260156097s Aug 17 01:14:45.667: INFO: Pod "downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.263973393s STEP: Saw pod success Aug 17 01:14:45.667: INFO: Pod "downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2" satisfied condition "Succeeded or Failed" Aug 17 01:14:45.669: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2 container client-container: STEP: delete the pod Aug 17 01:14:45.704: INFO: Waiting for pod downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2 to disappear Aug 17 01:14:45.726: INFO: Pod downwardapi-volume-b831ce67-ee3b-4bf2-81e5-84a0f4a8c2b2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:14:45.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5088" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":290,"skipped":4878,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 01:14:45.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 17 01:14:46.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -' Aug 17 01:14:52.628: INFO: stderr: "" Aug 17 01:14:52.628: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 17 01:14:52.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -' Aug 17 01:14:53.112: INFO: rc: 1 Aug 17 01:14:53.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -' Aug 17 01:14:53.218: INFO: stderr: "" Aug 17 01:14:53.218: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 01:14:53.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8138" for this suite. 
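
Note that the "rc: 1" above is the expected outcome, not an error: kubectl diff exits 0 when the live and declared objects match, 1 when differences are found, and greater than 1 only on an actual failure, so exit code 1 is what the test asserts after changing the image. A manual equivalent (the manifest filename is illustrative):

    kubectl create -f deployment.yaml
    # edit the container image tag in deployment.yaml, then:
    kubectl diff -f deployment.yaml
    echo $?                           # prints 1: a difference was found
    kubectl delete -f deployment.yaml
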
[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 01:14:45.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create deployment with httpd image
Aug 17 01:14:46.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -'
Aug 17 01:14:52.628: INFO: stderr: ""
Aug 17 01:14:52.628: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Aug 17 01:14:52.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -'
Aug 17 01:14:53.112: INFO: rc: 1
Aug 17 01:14:53.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -'
Aug 17 01:14:53.218: INFO: stderr: ""
Aug 17 01:14:53.218: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 01:14:53.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8138" for this suite.
• [SLOW TEST:7.490 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl diff
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:883
    should check if kubectl diff finds a difference for Deployments [Conformance]
    /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":294,"completed":291,"skipped":4885,"failed":0}
SSSSSSSSSSSS
------------------------------
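A note on the "rc: 1" above: it is the expected result, not a failure. kubectl diff exits 0 when live and declared states match, 1 when a difference is found, and greater than 1 on error; the test pipes in a manifest whose image differs from the live deployment and asserts on the exit code. A hand-run equivalent, with illustrative image tags:

  # create a deployment, then diff a manifest that declares a different image
  kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
  kubectl get deployment httpd-deployment -o yaml \
    | sed 's/httpd:2.4.38-alpine/httpd:2.4.39-alpine/' \
    | kubectl diff -f -
  echo "diff exit code: $?"   # expect 1: a difference was found
  kubectl delete deployment httpd-deployment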
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 01:14:53.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-6a5a7a90-76c1-463d-8671-04e172d94914
STEP: Creating a pod to test consume secrets
Aug 17 01:14:53.287: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57" in namespace "projected-259" to be "Succeeded or Failed"
Aug 17 01:14:53.303: INFO: Pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57": Phase="Pending", Reason="", readiness=false. Elapsed: 15.432456ms
Aug 17 01:14:55.307: INFO: Pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019273557s
Aug 17 01:14:57.311: INFO: Pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023806197s
Aug 17 01:14:59.314: INFO: Pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026631531s
STEP: Saw pod success
Aug 17 01:14:59.314: INFO: Pod "pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57" satisfied condition "Succeeded or Failed"
Aug 17 01:14:59.316: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57 container secret-volume-test:
STEP: delete the pod
Aug 17 01:14:59.353: INFO: Waiting for pod pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57 to disappear
Aug 17 01:14:59.360: INFO: Pod pod-projected-secrets-1a82ce95-c4e0-489c-84cf-be6fa6c32e57 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 01:14:59.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-259" for this suite.
• [SLOW TEST:6.141 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":292,"skipped":4897,"failed":0}
SSSSSSSSSSSSSS
------------------------------
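The spec above mounts one secret into the same pod through two separate projected volumes and reads it back from both paths. A minimal sketch with hypothetical names and an illustrative image:

  apiVersion: v1
  kind: Secret
  metadata:
    name: demo-secret              # hypothetical name
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox               # illustrative image
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
        readOnly: true
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
        readOnly: true
    volumes:                       # the same secret, projected twice
    - name: secret-volume-1
      projected:
        sources:
        - secret:
            name: demo-secret
    - name: secret-volume-2
      projected:
        sources:
        - secret:
            name: demo-secret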
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 01:14:59.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 01:15:17.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8567" for this suite.
• [SLOW TEST:17.689 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":294,"completed":293,"skipped":4911,"failed":0}
S
------------------------------
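The quota steps above boil down to: create a ResourceQuota that counts secrets, create a secret, watch status.used.secrets rise, delete the secret, watch it fall. A minimal sketch with a hypothetical quota name:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: quota-secrets            # hypothetical name
  spec:
    hard:
      secrets: "10"

  # after applying the quota and creating a secret in the namespace:
  kubectl get resourcequota quota-secrets -o jsonpath='{.status.used.secrets}'
  # the count rises on creation and drops back after deletion; the quota
  # controller updates status asynchronously, which is why the test polls
  # ("Ensuring resource quota status ...") rather than asserting immediately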
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 01:15:17.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-vldz
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 01:15:17.283: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vldz" in namespace "subpath-5601" to be "Succeeded or Failed"
Aug 17 01:15:17.405: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Pending", Reason="", readiness=false. Elapsed: 122.047422ms
Aug 17 01:15:19.470: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187206686s
Aug 17 01:15:21.474: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 4.190958236s
Aug 17 01:15:23.478: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 6.194768772s
Aug 17 01:15:25.482: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 8.198694998s
Aug 17 01:15:27.485: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 10.202118732s
Aug 17 01:15:29.489: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 12.206576383s
Aug 17 01:15:31.494: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 14.211069496s
Aug 17 01:15:33.506: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 16.223498006s
Aug 17 01:15:35.510: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 18.227490476s
Aug 17 01:15:37.515: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 20.231614277s
Aug 17 01:15:39.518: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 22.235549707s
Aug 17 01:15:41.555: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Running", Reason="", readiness=true. Elapsed: 24.271959091s
Aug 17 01:15:43.558: INFO: Pod "pod-subpath-test-configmap-vldz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.275000457s
STEP: Saw pod success
Aug 17 01:15:43.558: INFO: Pod "pod-subpath-test-configmap-vldz" satisfied condition "Succeeded or Failed"
Aug 17 01:15:43.674: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-vldz container test-container-subpath-configmap-vldz:
STEP: delete the pod
Aug 17 01:15:43.763: INFO: Waiting for pod pod-subpath-test-configmap-vldz to disappear
Aug 17 01:15:43.806: INFO: Pod pod-subpath-test-configmap-vldz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vldz
Aug 17 01:15:43.806: INFO: Deleting pod "pod-subpath-test-configmap-vldz" in namespace "subpath-5601"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 01:15:43.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5601" for this suite.
• [SLOW TEST:26.759 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.0-beta.2.880+82baa26905c943/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":294,"completed":294,"skipped":4912,"failed":0}
SSSSSSSS
Aug 17 01:15:43.814: INFO: Running AfterSuite actions on all nodes
Aug 17 01:15:43.814: INFO: Running AfterSuite actions on node 1
Aug 17 01:15:43.814: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":294,"completed":294,"skipped":4920,"failed":0}

Ran 294 of 5214 Specs in 6911.201 seconds
SUCCESS! -- 294 Passed | 0 Failed | 0 Pending | 4920 Skipped
PASS
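For reference, the final subpath spec above exercised mounting a single configmap key, via subPath, over a file path that already exists in the container image. A rough sketch with hypothetical names; the image and the choice of existing file are assumptions for illustration, not the actual e2e fixture:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo-cm          # hypothetical name
  data:
    data-1: from-configmap
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox               # illustrative image
      command: ["sh", "-c", "cat /etc/passwd"]
      volumeMounts:
      - name: cm-volume
        mountPath: /etc/passwd     # a file that already exists in the image (assumption)
        subPath: data-1            # mount just this key over that file
    volumes:
    - name: cm-volume
      configMap:
        name: subpath-demo-cm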