I0422 23:37:34.327530 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0422 23:37:34.327706 7 e2e.go:124] Starting e2e run "121f9a33-06cb-45d7-aa8f-f9a1efa75554" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587598653 - Will randomize all specs
Will run 275 of 4992 specs

Apr 22 23:37:34.382: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:37:34.388: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 23:37:34.413: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:37:34.454: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:37:34.454: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 22 23:37:34.454: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 23:37:34.469: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 22 23:37:34.469: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 23:37:34.469: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 22 23:37:34.470: INFO: kube-apiserver version: v1.17.0
Apr 22 23:37:34.470: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:37:34.475: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:37:34.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
Apr 22 23:37:34.565: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:37:34.572: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3c292cc4-42a9-43ce-a4a6-ed7aa3eafcfd" in namespace "security-context-test-4561" to be "Succeeded or Failed"
Apr 22 23:37:34.581: INFO: Pod "alpine-nnp-false-3c292cc4-42a9-43ce-a4a6-ed7aa3eafcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697352ms
Apr 22 23:37:36.585: INFO: Pod "alpine-nnp-false-3c292cc4-42a9-43ce-a4a6-ed7aa3eafcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013389781s
Apr 22 23:37:38.589: INFO: Pod "alpine-nnp-false-3c292cc4-42a9-43ce-a4a6-ed7aa3eafcfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017582014s
Apr 22 23:37:38.590: INFO: Pod "alpine-nnp-false-3c292cc4-42a9-43ce-a4a6-ed7aa3eafcfd" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:37:38.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4561" for this suite.
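[Editor's note] The behavior exercised above can be reproduced with a manifest along these lines. This is a minimal sketch, not the manifest the suite actually generates; the pod name, image tag, and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine:3.11          # the test pod above is alpine-based
    command: ["sh", "-c", "id"]
    securityContext:
      # The assertion under test: processes in this container must not
      # be able to gain more privileges than their parent (no_new_privs).
      allowPrivilegeEscalation: false
```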
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":34,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:37:38.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 22 23:37:42.718: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-355 PodName:pod-sharedvolume-33e4bc58-2edd-4fba-8936-3c0da54538d5 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 23:37:42.718: INFO: >>> kubeConfig: /root/.kube/config
I0422 23:37:42.758951 7 log.go:172] (0xc0026b7600) (0xc001acd2c0) Create stream
I0422 23:37:42.758982 7 log.go:172] (0xc0026b7600) (0xc001acd2c0) Stream added, broadcasting: 1
I0422 23:37:42.762467 7 log.go:172] (0xc0026b7600) Reply frame received for 1
I0422 23:37:42.762572 7 log.go:172] (0xc0026b7600) (0xc001752140) Create stream
I0422 23:37:42.762612 7 log.go:172] (0xc0026b7600) (0xc001752140) Stream added, broadcasting: 3
I0422 23:37:42.763562 7 log.go:172] (0xc0026b7600) Reply frame received for 3
I0422 23:37:42.763591 7 log.go:172] (0xc0026b7600) (0xc001acd400) Create stream
I0422 23:37:42.763603 7 log.go:172] (0xc0026b7600) (0xc001acd400) Stream added, broadcasting: 5
I0422 23:37:42.764561 7 log.go:172] (0xc0026b7600) Reply frame received for 5
I0422 23:37:42.844928 7 log.go:172] (0xc0026b7600) Data frame received for 5
I0422 23:37:42.844981 7 log.go:172] (0xc001acd400) (5) Data frame handling
I0422 23:37:42.845032 7 log.go:172] (0xc0026b7600) Data frame received for 3
I0422 23:37:42.845060 7 log.go:172] (0xc001752140) (3) Data frame handling
I0422 23:37:42.845102 7 log.go:172] (0xc001752140) (3) Data frame sent
I0422 23:37:42.845335 7 log.go:172] (0xc0026b7600) Data frame received for 3
I0422 23:37:42.845365 7 log.go:172] (0xc001752140) (3) Data frame handling
I0422 23:37:42.846815 7 log.go:172] (0xc0026b7600) Data frame received for 1
I0422 23:37:42.846839 7 log.go:172] (0xc001acd2c0) (1) Data frame handling
I0422 23:37:42.846857 7 log.go:172] (0xc001acd2c0) (1) Data frame sent
I0422 23:37:42.846873 7 log.go:172] (0xc0026b7600) (0xc001acd2c0) Stream removed, broadcasting: 1
I0422 23:37:42.846890 7 log.go:172] (0xc0026b7600) Go away received
I0422 23:37:42.847537 7 log.go:172] (0xc0026b7600) (0xc001acd2c0) Stream removed, broadcasting: 1
I0422 23:37:42.847556 7 log.go:172] (0xc0026b7600) (0xc001752140) Stream removed, broadcasting: 3
I0422 23:37:42.847565 7 log.go:172] (0xc0026b7600) (0xc001acd400) Stream removed, broadcasting: 5
Apr 22 23:37:42.847: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:37:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-355" for this suite.
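[Editor's note] The shared-volume arrangement being tested (an nginx container and a busybox main container mounting the same emptyDir, with the file read at /usr/share/volumeshare/shareddata.txt) can be sketched roughly as follows. Image tags and the writer command are illustrative assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo   # illustrative name
spec:
  volumes:
  - name: volumeshare
    emptyDir: {}                # ephemeral volume shared by both containers
  containers:
  - name: nginx-container
    image: nginx:1.17           # assumed tag; writes the shared file
    command: ["sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && nginx -g 'daemon off;'"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox:1.31         # assumed tag; the container the test execs into
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
```

The test then runs `cat /usr/share/volumeshare/shareddata.txt` inside busybox-main-container, which succeeds because both containers see the same emptyDir contents.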
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":2,"skipped":35,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:37:42.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-66e22e1d-dc76-4a2d-8187-469c2d5b2589 in namespace container-probe-1206
Apr 22 23:37:46.947: INFO: Started pod busybox-66e22e1d-dc76-4a2d-8187-469c2d5b2589 in namespace container-probe-1206
STEP: checking the pod's current state and verifying that restartCount is present
Apr 22 23:37:46.950: INFO: Initial restart count of pod busybox-66e22e1d-dc76-4a2d-8187-469c2d5b2589 is 0
Apr 22 23:38:39.077: INFO: Restart count of pod container-probe-1206/busybox-66e22e1d-dc76-4a2d-8187-469c2d5b2589 is now 1 (52.12743926s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:38:39.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1206" for this suite.
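[Editor's note] A liveness probe of the kind this test exercises (exec `cat /tmp/health`, container restarted once the file disappears, as seen in the restart-count jump above) looks roughly like this. The image tag, timings, and the self-deleting command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo      # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.31         # assumed tag
    # Create the health file, then remove it after 30s so the probe
    # starts failing and the kubelet restarts the container.
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `/tmp/health` is removed the probe fails, and the pod's `status.containerStatuses[].restartCount` increments, which is exactly what the log's "Restart count ... is now 1" line verifies.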
• [SLOW TEST:56.289 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":42,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:38:39.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 22 23:38:44.345: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:38:44.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5104" for this suite.
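[Editor's note] The adoption/release steps above rely on a ReplicaSet whose selector matches a pre-existing bare pod labeled `name: pod-adoption-release`. A sketch of such a ReplicaSet (image and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine # assumed image
```

When the ReplicaSet is created, the controller adopts the matching orphan pod (sets itself as the pod's ownerReference) instead of creating a new one; when the pod's `name` label is later changed so it no longer matches the selector, the controller releases it by removing the ownerReference, which is what the "Then the pod is released" step asserts.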
• [SLOW TEST:5.272 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":4,"skipped":56,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:38:44.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-mxhh
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 23:38:44.566: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mxhh" in namespace "subpath-3045" to be "Succeeded or Failed"
Apr 22 23:38:44.603: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.3209ms
Apr 22 23:38:46.644: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078622199s
Apr 22 23:38:48.662: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 4.096308296s
Apr 22 23:38:50.674: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 6.108632264s
Apr 22 23:38:52.678: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 8.112438834s
Apr 22 23:38:54.681: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 10.115683178s
Apr 22 23:38:56.695: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 12.129177317s
Apr 22 23:38:58.703: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 14.136995797s
Apr 22 23:39:00.707: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 16.141363369s
Apr 22 23:39:02.711: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 18.144860107s
Apr 22 23:39:04.715: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 20.148904906s
Apr 22 23:39:06.718: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Running", Reason="", readiness=true. Elapsed: 22.152694928s
Apr 22 23:39:08.722: INFO: Pod "pod-subpath-test-secret-mxhh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.15585488s
STEP: Saw pod success
Apr 22 23:39:08.722: INFO: Pod "pod-subpath-test-secret-mxhh" satisfied condition "Succeeded or Failed"
Apr 22 23:39:08.724: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-mxhh container test-container-subpath-secret-mxhh:
STEP: delete the pod
Apr 22 23:39:08.753: INFO: Waiting for pod pod-subpath-test-secret-mxhh to disappear
Apr 22 23:39:08.758: INFO: Pod pod-subpath-test-secret-mxhh no longer exists
STEP: Deleting pod pod-subpath-test-secret-mxhh
Apr 22 23:39:08.758: INFO: Deleting pod "pod-subpath-test-secret-mxhh" in namespace "subpath-3045"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:39:08.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3045" for this suite.
• [SLOW TEST:24.334 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":5,"skipped":62,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:39:08.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1674
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:39:08.814: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:39:08.891: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:39:10.902: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:39:12.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:14.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:16.896: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:18.897: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:20.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:22.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:24.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:26.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:28.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:39:30.895: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:39:30.906: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:39:34.991: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1674 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 23:39:34.991: INFO: >>> kubeConfig: /root/.kube/config
I0422 23:39:35.014042 7 log.go:172] (0xc002d50370) (0xc001f42320) Create stream
I0422 23:39:35.014066 7 log.go:172] (0xc002d50370) (0xc001f42320) Stream added, broadcasting: 1
I0422 23:39:35.015519 7 log.go:172] (0xc002d50370) Reply frame received for 1
I0422 23:39:35.015543 7 log.go:172] (0xc002d50370) (0xc001f423c0) Create stream
I0422 23:39:35.015551 7 log.go:172] (0xc002d50370) (0xc001f423c0) Stream added, broadcasting: 3
I0422 23:39:35.016234 7 log.go:172] (0xc002d50370) Reply frame received for 3
I0422 23:39:35.016263 7 log.go:172] (0xc002d50370) (0xc002e21860) Create stream
I0422 23:39:35.016274 7 log.go:172] (0xc002d50370) (0xc002e21860) Stream added, broadcasting: 5
I0422 23:39:35.017016 7 log.go:172] (0xc002d50370) Reply frame received for 5
I0422 23:39:36.082643 7 log.go:172] (0xc002d50370) Data frame received for 5
I0422 23:39:36.082730 7 log.go:172] (0xc002e21860) (5) Data frame handling
I0422 23:39:36.082773 7 log.go:172] (0xc002d50370) Data frame received for 3
I0422 23:39:36.082851 7 log.go:172] (0xc001f423c0) (3) Data frame handling
I0422 23:39:36.082901 7 log.go:172] (0xc001f423c0) (3) Data frame sent
I0422 23:39:36.082932 7 log.go:172] (0xc002d50370) Data frame received for 3
I0422 23:39:36.082949 7 log.go:172] (0xc001f423c0) (3) Data frame handling
I0422 23:39:36.084850 7 log.go:172] (0xc002d50370) Data frame received for 1
I0422 23:39:36.084879 7 log.go:172] (0xc001f42320) (1) Data frame handling
I0422 23:39:36.084900 7 log.go:172] (0xc001f42320) (1) Data frame sent
I0422 23:39:36.084924 7 log.go:172] (0xc002d50370) (0xc001f42320) Stream removed, broadcasting: 1
I0422 23:39:36.085018 7 log.go:172] (0xc002d50370) Go away received
I0422 23:39:36.085069 7 log.go:172] (0xc002d50370) (0xc001f42320) Stream removed, broadcasting: 1
I0422 23:39:36.085107 7 log.go:172] (0xc002d50370) (0xc001f423c0) Stream removed, broadcasting: 3
I0422 23:39:36.085342 7 log.go:172] (0xc002d50370) (0xc002e21860) Stream removed, broadcasting: 5
Apr 22 23:39:36.085: INFO: Found all expected endpoints: [netserver-0]
Apr 22 23:39:36.089: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.174 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1674 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 22 23:39:36.089: INFO: >>> kubeConfig: /root/.kube/config
I0422 23:39:36.124912 7 log.go:172] (0xc0026b6420) (0xc001e56aa0) Create stream
I0422 23:39:36.124957 7 log.go:172] (0xc0026b6420) (0xc001e56aa0) Stream added, broadcasting: 1
I0422 23:39:36.127738 7 log.go:172] (0xc0026b6420) Reply frame received for 1
I0422 23:39:36.127777 7 log.go:172] (0xc0026b6420) (0xc002e21900) Create stream
I0422 23:39:36.127792 7 log.go:172] (0xc0026b6420) (0xc002e21900) Stream added, broadcasting: 3
I0422 23:39:36.128684 7 log.go:172] (0xc0026b6420) Reply frame received for 3
I0422 23:39:36.128727 7 log.go:172] (0xc0026b6420) (0xc002e219a0) Create stream
I0422 23:39:36.128740 7 log.go:172] (0xc0026b6420) (0xc002e219a0) Stream added, broadcasting: 5
I0422 23:39:36.129611 7 log.go:172] (0xc0026b6420) Reply frame received for 5
I0422 23:39:37.217332 7 log.go:172] (0xc0026b6420) Data frame received for 3
I0422 23:39:37.217370 7 log.go:172] (0xc002e21900) (3) Data frame handling
I0422 23:39:37.217393 7 log.go:172] (0xc002e21900) (3) Data frame sent
I0422 23:39:37.217403 7 log.go:172] (0xc0026b6420) Data frame received for 3
I0422 23:39:37.217408 7 log.go:172] (0xc002e21900) (3) Data frame handling
I0422 23:39:37.217481 7 log.go:172] (0xc0026b6420) Data frame received for 5
I0422 23:39:37.217489 7 log.go:172] (0xc002e219a0) (5) Data frame handling
I0422 23:39:37.219293 7 log.go:172] (0xc0026b6420) Data frame received for 1
I0422 23:39:37.219316 7 log.go:172] (0xc001e56aa0) (1) Data frame handling
I0422 23:39:37.219330 7 log.go:172] (0xc001e56aa0) (1) Data frame sent
I0422 23:39:37.219339 7 log.go:172] (0xc0026b6420) (0xc001e56aa0) Stream removed, broadcasting: 1
I0422 23:39:37.219356 7 log.go:172] (0xc0026b6420) Go away received
I0422 23:39:37.219562 7 log.go:172] (0xc0026b6420) (0xc001e56aa0) Stream removed, broadcasting: 1
I0422 23:39:37.219608 7 log.go:172] (0xc0026b6420) (0xc002e21900) Stream removed, broadcasting: 3
I0422 23:39:37.219639 7 log.go:172] (0xc0026b6420) (0xc002e219a0) Stream removed, broadcasting: 5
Apr 22 23:39:37.219: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:39:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1674" for this suite.
• [SLOW TEST:28.463 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":84,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:39:37.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 22 23:39:37.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968" in namespace "downward-api-2997" to be "Succeeded or Failed"
Apr 22 23:39:37.314: INFO: Pod "downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968": Phase="Pending", Reason="", readiness=false. Elapsed: 13.782855ms
Apr 22 23:39:39.318: INFO: Pod "downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017781489s
Apr 22 23:39:41.322: INFO: Pod "downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022168268s
STEP: Saw pod success
Apr 22 23:39:41.322: INFO: Pod "downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968" satisfied condition "Succeeded or Failed"
Apr 22 23:39:41.326: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968 container client-container:
STEP: delete the pod
Apr 22 23:39:41.377: INFO: Waiting for pod downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968 to disappear
Apr 22 23:39:41.411: INFO: Pod downwardapi-volume-88edf908-0800-4fe1-99a7-cd0ac5035968 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:39:41.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2997" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":112,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:39:41.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 22 23:39:41.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4" in namespace "projected-3159" to be "Succeeded or Failed"
Apr 22 23:39:41.489: INFO: Pod "downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.483911ms
Apr 22 23:39:43.543: INFO: Pod "downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057646709s
Apr 22 23:39:45.547: INFO: Pod "downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06117572s
STEP: Saw pod success
Apr 22 23:39:45.547: INFO: Pod "downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4" satisfied condition "Succeeded or Failed"
Apr 22 23:39:45.550: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4 container client-container:
STEP: delete the pod
Apr 22 23:39:45.683: INFO: Waiting for pod downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4 to disappear
Apr 22 23:39:45.693: INFO: Pod downwardapi-volume-9c00d553-8c92-4869-b597-3efdbb7785b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:39:45.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3159" for this suite.
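[Editor's note] The downward-API volume mechanism exercised by the two tests above can be sketched as follows. The e2e test for the "node allocatable as default memory limit" case uses a projected volume; this simpler plain `downwardAPI` volume illustrates the same `resourceFieldRef` behavior. Names, image, and the request value are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-memory-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31         # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      requests:
        memory: "32Mi"          # request set; note: no memory limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

Because no memory limit is set on the container, `limits.memory` in the downward API file resolves to the node's allocatable memory, which is exactly the default the projected-downwardAPI test asserts.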
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":115,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:39:45.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:39:45.756: INFO: Creating deployment "test-recreate-deployment"
Apr 22 23:39:45.771: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 22 23:39:45.797: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 22 23:39:47.805: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 22 23:39:47.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195585, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195585, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195585, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195585, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:39:49.812: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 22 23:39:49.819: INFO: Updating deployment test-recreate-deployment
Apr 22 23:39:49.819: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Apr 22 23:39:50.279: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8453 /apis/apps/v1/namespaces/deployment-8453/deployments/test-recreate-deployment 9e7037c3-b26e-40b4-8a1b-a4a8bea80b0d 10246217 2 2020-04-22 23:39:45 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022ed388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-22 23:39:49 +0000 UTC,LastTransitionTime:2020-04-22 23:39:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-22 23:39:50 +0000 UTC,LastTransitionTime:2020-04-22 23:39:45 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Apr 22 23:39:50.322: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8453 /apis/apps/v1/namespaces/deployment-8453/replicasets/test-recreate-deployment-5f94c574ff c1bcbaea-391f-4609-93e1-9cf7dd489bef 10246215 1 2020-04-22 23:39:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9e7037c3-b26e-40b4-8a1b-a4a8bea80b0d 0xc0008d8f37 0xc0008d8f38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] []
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008d8f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 23:39:50.322: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 22 23:39:50.322: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-8453 /apis/apps/v1/namespaces/deployment-8453/replicasets/test-recreate-deployment-846c7dd955 b7c8be67-cd2b-4b2f-994a-dbf91e3c7a6c 10246205 2 2020-04-22 23:39:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9e7037c3-b26e-40b4-8a1b-a4a8bea80b0d 0xc0008d9007 0xc0008d9008}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008d9078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 23:39:50.327: INFO: Pod "test-recreate-deployment-5f94c574ff-9n8t9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-9n8t9 test-recreate-deployment-5f94c574ff- deployment-8453 /api/v1/namespaces/deployment-8453/pods/test-recreate-deployment-5f94c574ff-9n8t9 a4c04b42-957d-4a1e-95a3-35eb2126a6c0 10246216 0 2020-04-22 23:39:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c1bcbaea-391f-4609-93e1-9cf7dd489bef 0xc0022edb67 0xc0022edb68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jbv9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jbv9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jbv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:39:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:39:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:39:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:39:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-22 23:39:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:39:50.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8453" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":9,"skipped":130,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:39:50.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-5210/configmap-test-b5c4f04b-9cfc-4a18-a325-e77e8c23328e STEP: Creating a pod to test consume configMaps Apr 22 23:39:50.484: INFO: Waiting up to 5m0s for pod "pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40" in namespace 
"configmap-5210" to be "Succeeded or Failed" Apr 22 23:39:50.494: INFO: Pod "pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020276ms Apr 22 23:39:52.498: INFO: Pod "pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013951922s Apr 22 23:39:54.502: INFO: Pod "pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017483389s STEP: Saw pod success Apr 22 23:39:54.502: INFO: Pod "pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40" satisfied condition "Succeeded or Failed" Apr 22 23:39:54.504: INFO: Trying to get logs from node latest-worker pod pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40 container env-test: STEP: delete the pod Apr 22 23:39:54.653: INFO: Waiting for pod pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40 to disappear Apr 22 23:39:54.681: INFO: Pod pod-configmaps-05a3f679-4bdc-47e8-9874-deec5bcf5e40 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:39:54.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5210" for this suite. 
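The repeated `Waiting up to 5m0s for pod ... Elapsed: ...` lines above come from the framework's poll-until-condition loop: re-check the pod phase on a fixed interval, log the elapsed time, and stop on "Succeeded or Failed" or on timeout. A minimal sketch of that pattern in Python (the function name and the injectable `clock`/`sleep` parameters are illustrative, not the actual e2e framework API):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True,
    reporting elapsed time on each attempt, or give up after `timeout`
    seconds. A sketch of the wait loop seen in the log, not the
    framework's real implementation."""
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return elapsed  # condition satisfied
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        print(f"still waiting, elapsed: {elapsed:.3f}s")
        sleep(interval)
```

In the e2e framework, `check` would wrap a fresh GET of the pod and accept either the Succeeded or the Failed phase, which is why the log condition reads "Succeeded or Failed".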
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:39:54.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 22 23:39:54.840: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8505 /api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-resource-version ae9e0a34-157a-4e4f-b717-a7f72443f424 10246268 0 2020-04-22 23:39:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 22 23:39:54.840: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8505 /api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-resource-version ae9e0a34-157a-4e4f-b717-a7f72443f424 10246269 0 2020-04-22 23:39:54 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:39:54.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8505" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":11,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:39:54.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:05.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1283" for this suite. • [SLOW TEST:11.127 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":12,"skipped":185,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:40:05.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 22 23:40:06.108: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 7.057927ms)
Apr 22 23:40:06.111: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.756545ms)
Apr 22 23:40:06.115: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.811089ms)
Apr 22 23:40:06.119: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.365256ms)
Apr 22 23:40:06.122: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.484833ms)
Apr 22 23:40:06.126: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.450151ms)
Apr 22 23:40:06.129: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.280439ms)
Apr 22 23:40:06.132: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.859345ms)
Apr 22 23:40:06.135: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.765452ms)
Apr 22 23:40:06.155: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 19.791888ms)
Apr 22 23:40:06.159: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.26299ms)
Apr 22 23:40:06.162: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.345078ms)
Apr 22 23:40:06.166: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.468805ms)
Apr 22 23:40:06.169: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.804109ms)
Apr 22 23:40:06.171: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.813988ms)
Apr 22 23:40:06.175: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.288925ms)
Apr 22 23:40:06.178: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.023003ms)
Apr 22 23:40:06.181: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.375142ms)
Apr 22 23:40:06.184: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.064203ms)
Apr 22 23:40:06.188: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 3.625748ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:06.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5573" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":13,"skipped":186,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:40:06.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 22 23:40:06.245: INFO: Waiting up to 5m0s for pod "pod-042f4e3a-c89b-416c-bffc-77e4a85fb593" in namespace "emptydir-9768" to be "Succeeded or Failed" Apr 22 23:40:06.249: INFO: Pod "pod-042f4e3a-c89b-416c-bffc-77e4a85fb593": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666861ms Apr 22 23:40:08.253: INFO: Pod "pod-042f4e3a-c89b-416c-bffc-77e4a85fb593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007885701s Apr 22 23:40:10.257: INFO: Pod "pod-042f4e3a-c89b-416c-bffc-77e4a85fb593": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011612724s STEP: Saw pod success Apr 22 23:40:10.257: INFO: Pod "pod-042f4e3a-c89b-416c-bffc-77e4a85fb593" satisfied condition "Succeeded or Failed" Apr 22 23:40:10.260: INFO: Trying to get logs from node latest-worker2 pod pod-042f4e3a-c89b-416c-bffc-77e4a85fb593 container test-container: STEP: delete the pod Apr 22 23:40:10.299: INFO: Waiting for pod pod-042f4e3a-c89b-416c-bffc-77e4a85fb593 to disappear Apr 22 23:40:10.315: INFO: Pod pod-042f4e3a-c89b-416c-bffc-77e4a85fb593 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:10.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9768" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":201,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:40:10.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:10.375: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1333" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":15,"skipped":206,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:40:10.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 22 23:40:10.496: INFO: Waiting up to 5m0s for pod "downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35" in namespace "downward-api-5333" to be "Succeeded or Failed" Apr 22 23:40:10.511: INFO: Pod "downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35": Phase="Pending", Reason="", readiness=false. Elapsed: 14.914969ms Apr 22 23:40:13.048: INFO: Pod "downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.551501351s Apr 22 23:40:15.052: INFO: Pod "downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.55583172s STEP: Saw pod success Apr 22 23:40:15.052: INFO: Pod "downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35" satisfied condition "Succeeded or Failed" Apr 22 23:40:15.055: INFO: Trying to get logs from node latest-worker2 pod downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35 container dapi-container: STEP: delete the pod Apr 22 23:40:15.090: INFO: Waiting for pod downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35 to disappear Apr 22 23:40:15.142: INFO: Pod downward-api-a9192972-a630-4c1e-b1b8-db3111f8fe35 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:15.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5333" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":208,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:40:15.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace 
statefulset-2872 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2872 Apr 22 23:40:15.292: INFO: Found 0 stateful pods, waiting for 1 Apr 22 23:40:25.298: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 22 23:40:25.317: INFO: Deleting all statefulset in ns statefulset-2872 Apr 22 23:40:25.324: INFO: Scaling statefulset ss to 0 Apr 22 23:40:45.387: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 23:40:45.390: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:40:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2872" for this suite. 
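The "getting/updating a scale subresource" steps above do not PUT the whole StatefulSet; they read and write a small autoscaling/v1 Scale object at `.../statefulsets/ss/scale`, of which only `spec.replicas` is writable. A sketch of that object's shape (field values are illustrative, not captured from this run):

```python
# Minimal sketch of the autoscaling/v1 Scale object used by the
# scale subresource; only spec.replicas is meant to be mutated.
scale = {
    "apiVersion": "autoscaling/v1",
    "kind": "Scale",
    "metadata": {"name": "ss", "namespace": "statefulset-2872"},
    "spec": {"replicas": 1},    # desired replica count (writable)
    "status": {"replicas": 1},  # observed replica count (read-only)
}

def set_replicas(scale_obj, n):
    """Mimic the test's 'updating a scale subresource' step: change
    only the desired replica count, leaving everything else alone."""
    scale_obj["spec"]["replicas"] = n
    return scale_obj

set_replicas(scale, 2)  # the test then verifies Spec.Replicas changed
```

Updating through the subresource is what lets the HorizontalPodAutoscaler (and `kubectl scale`) resize a workload without needing permission to edit its full spec.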
• [SLOW TEST:30.259 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":17,"skipped":223,"failed":0}
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:40:45.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 22 23:40:45.504: INFO: Waiting up to 5m0s for pod "pod-a11fd850-0c18-4393-a670-daa10c5ef1bc" in namespace "emptydir-3759" to be "Succeeded or Failed"
Apr 22 23:40:45.532: INFO: Pod "pod-a11fd850-0c18-4393-a670-daa10c5ef1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.61272ms
Apr 22 23:40:47.536: INFO: Pod "pod-a11fd850-0c18-4393-a670-daa10c5ef1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032661263s
Apr 22 23:40:49.540: INFO: Pod "pod-a11fd850-0c18-4393-a670-daa10c5ef1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036492585s
STEP: Saw pod success
Apr 22 23:40:49.540: INFO: Pod "pod-a11fd850-0c18-4393-a670-daa10c5ef1bc" satisfied condition "Succeeded or Failed"
Apr 22 23:40:49.556: INFO: Trying to get logs from node latest-worker pod pod-a11fd850-0c18-4393-a670-daa10c5ef1bc container test-container:
STEP: delete the pod
Apr 22 23:40:49.588: INFO: Waiting for pod pod-a11fd850-0c18-4393-a670-daa10c5ef1bc to disappear
Apr 22 23:40:49.604: INFO: Pod pod-a11fd850-0c18-4393-a670-daa10c5ef1bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:40:49.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3759" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":223,"failed":0}
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:40:49.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-lmxb
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 23:40:49.888: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lmxb" in namespace "subpath-2729" to be "Succeeded or Failed"
Apr 22 23:40:49.892: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.952606ms
Apr 22 23:40:51.896: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008118328s
Apr 22 23:40:53.900: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 4.012367902s
Apr 22 23:40:55.904: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 6.016131054s
Apr 22 23:40:57.908: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 8.020351585s
Apr 22 23:40:59.912: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 10.024530606s
Apr 22 23:41:01.916: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 12.0287642s
Apr 22 23:41:03.920: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 14.032697774s
Apr 22 23:41:05.924: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 16.036650822s
Apr 22 23:41:07.928: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 18.040823854s
Apr 22 23:41:09.933: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 20.04530298s
Apr 22 23:41:11.937: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Running", Reason="", readiness=true. Elapsed: 22.049926191s
Apr 22 23:41:13.942: INFO: Pod "pod-subpath-test-configmap-lmxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054449826s
STEP: Saw pod success
Apr 22 23:41:13.942: INFO: Pod "pod-subpath-test-configmap-lmxb" satisfied condition "Succeeded or Failed"
Apr 22 23:41:13.945: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-lmxb container test-container-subpath-configmap-lmxb:
STEP: delete the pod
Apr 22 23:41:13.979: INFO: Waiting for pod pod-subpath-test-configmap-lmxb to disappear
Apr 22 23:41:14.005: INFO: Pod pod-subpath-test-configmap-lmxb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lmxb
Apr 22 23:41:14.005: INFO: Deleting pod "pod-subpath-test-configmap-lmxb" in namespace "subpath-2729"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:41:14.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2729" for this suite.
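[Editor's note] The subpath test above mounts a single ConfigMap key over an existing file via `subPath`. A minimal sketch of what such a pod manifest looks like, built as a plain dict; the names, image, and paths here are illustrative assumptions, not the ones the test framework generates:

```python
# Sketch of a pod mounting one key of a ConfigMap over an existing file
# with subPath. Field names follow the core/v1 Pod schema; everything
# else (names, image, paths) is an illustrative assumption.

def configmap_subpath_pod(name, configmap, key, mount_path):
    """Build a pod manifest that mounts `key` of `configmap` at `mount_path`."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [
                {"name": "config", "configMap": {"name": configmap}},
            ],
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["cat", mount_path],
                "volumeMounts": [{
                    "name": "config",
                    "mountPath": mount_path,  # the existing file is shadowed
                    "subPath": key,           # mount only this key, not the whole map
                }],
            }],
        },
    }

pod = configmap_subpath_pod("pod-subpath-example", "my-config", "data.txt", "/etc/hostname")
```

Without `subPath`, mounting the volume at `/etc/hostname` would replace the whole directory entry with the ConfigMap's contents; `subPath` projects just the one key over the existing file, which is what the conformance test verifies.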
• [SLOW TEST:24.404 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":19,"skipped":223,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:41:14.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0422 23:41:25.677762 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 22 23:41:25.677: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:41:25.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-822" for this suite.
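[Editor's note] The garbage-collector case above gives half the pods a second ownerReference (`simpletest-rc-to-stay`) and then deletes only `simpletest-rc-to-be-deleted`, checking that those pods survive. The underlying rule can be modeled simply: a dependent is collectible only once none of its owner references resolve to a live object. This is a rough illustrative model, not the actual garbage-collector code:

```python
# Rough model of the ownerReference rule this GC test exercises: a pod with
# owner references to two ReplicationControllers is only garbage-collected
# once *all* of its owners are gone. Illustrative sketch only.

def is_collectible(dependent, live_uids):
    """A dependent may be GC'd only when every owner reference is dangling."""
    owners = dependent.get("ownerReferences", [])
    return bool(owners) and all(ref["uid"] not in live_uids for ref in owners)

pod = {"ownerReferences": [{"uid": "rc-to-be-deleted"},
                           {"uid": "rc-to-stay"}]}

# rc-to-be-deleted is removed, but rc-to-stay is still live: the pod survives.
survives = not is_collectible(pod, live_uids={"rc-to-stay"})
# Only once both owners are gone does the dependent become collectible.
collectible_later = is_collectible(pod, live_uids=set())
```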
• [SLOW TEST:11.670 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":20,"skipped":243,"failed":0}
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:41:25.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-6db4a847-2c92-4797-bc6c-5de2917c65a8
STEP: Creating secret with name s-test-opt-upd-f08a3895-5814-4e14-aa94-510ef7dfc55f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6db4a847-2c92-4797-bc6c-5de2917c65a8
STEP: Updating secret s-test-opt-upd-f08a3895-5814-4e14-aa94-510ef7dfc55f
STEP: Creating secret with name s-test-opt-create-0f6db4d1-3a34-46c7-a50a-d888548515df
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:42:38.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5028" for this suite.
• [SLOW TEST:73.214 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":243,"failed":0}
SS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:42:38.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-9688
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9688 to expose endpoints map[]
Apr 22 23:42:39.055: INFO: Get endpoints failed (31.144049ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 22 23:42:40.057: INFO: successfully validated that service endpoint-test2 in namespace services-9688 exposes endpoints map[] (1.033950333s elapsed)
STEP: Creating pod pod1 in namespace services-9688
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9688 to expose endpoints map[pod1:[80]]
Apr 22 23:42:44.325: INFO: successfully validated that service endpoint-test2 in namespace services-9688 exposes endpoints map[pod1:[80]] (4.262425464s elapsed)
STEP: Creating pod pod2 in namespace services-9688
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9688 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 22 23:42:48.553: INFO: successfully validated that service endpoint-test2 in namespace services-9688 exposes endpoints map[pod1:[80] pod2:[80]] (4.220554485s elapsed)
STEP: Deleting pod pod1 in namespace services-9688
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9688 to expose endpoints map[pod2:[80]]
Apr 22 23:42:49.622: INFO: successfully validated that service endpoint-test2 in namespace services-9688 exposes endpoints map[pod2:[80]] (1.064540399s elapsed)
STEP: Deleting pod pod2 in namespace services-9688
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9688 to expose endpoints map[]
Apr 22 23:42:50.640: INFO: successfully validated that service endpoint-test2 in namespace services-9688 exposes endpoints map[] (1.012730841s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:42:50.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9688" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:11.770 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":22,"skipped":245,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:42:50.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 22 23:42:50.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5604'
Apr 22 23:42:53.095: INFO: stderr: ""
Apr 22 23:42:53.095: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Apr 22 23:42:53.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5604'
Apr 22 23:42:57.344: INFO: stderr: ""
Apr 22 23:42:57.344: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:42:57.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5604" for this suite.
• [SLOW TEST:6.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":23,"skipped":248,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:42:57.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3939 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 22 23:42:57.421: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 22 23:42:57.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:42:59.565: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 23:43:01.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:03.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:05.507: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:07.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:09.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:11.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 22 23:43:13.506: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 22 23:43:13.513: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 22 23:43:17.543: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.89:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3939 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 23:43:17.543: INFO: >>> kubeConfig: /root/.kube/config I0422 23:43:17.580648 7 log.go:172] (0xc002d502c0) (0xc00255f400) Create stream I0422 23:43:17.580688 7 log.go:172] 
(0xc002d502c0) (0xc00255f400) Stream added, broadcasting: 1 I0422 23:43:17.582933 7 log.go:172] (0xc002d502c0) Reply frame received for 1 I0422 23:43:17.582971 7 log.go:172] (0xc002d502c0) (0xc001782000) Create stream I0422 23:43:17.582985 7 log.go:172] (0xc002d502c0) (0xc001782000) Stream added, broadcasting: 3 I0422 23:43:17.584147 7 log.go:172] (0xc002d502c0) Reply frame received for 3 I0422 23:43:17.584199 7 log.go:172] (0xc002d502c0) (0xc00255f540) Create stream I0422 23:43:17.584216 7 log.go:172] (0xc002d502c0) (0xc00255f540) Stream added, broadcasting: 5 I0422 23:43:17.585296 7 log.go:172] (0xc002d502c0) Reply frame received for 5 I0422 23:43:17.651789 7 log.go:172] (0xc002d502c0) Data frame received for 3 I0422 23:43:17.651820 7 log.go:172] (0xc001782000) (3) Data frame handling I0422 23:43:17.651835 7 log.go:172] (0xc001782000) (3) Data frame sent I0422 23:43:17.651845 7 log.go:172] (0xc002d502c0) Data frame received for 3 I0422 23:43:17.651851 7 log.go:172] (0xc001782000) (3) Data frame handling I0422 23:43:17.651867 7 log.go:172] (0xc002d502c0) Data frame received for 5 I0422 23:43:17.651894 7 log.go:172] (0xc00255f540) (5) Data frame handling I0422 23:43:17.653748 7 log.go:172] (0xc002d502c0) Data frame received for 1 I0422 23:43:17.653771 7 log.go:172] (0xc00255f400) (1) Data frame handling I0422 23:43:17.653797 7 log.go:172] (0xc00255f400) (1) Data frame sent I0422 23:43:17.653947 7 log.go:172] (0xc002d502c0) (0xc00255f400) Stream removed, broadcasting: 1 I0422 23:43:17.653977 7 log.go:172] (0xc002d502c0) Go away received I0422 23:43:17.654012 7 log.go:172] (0xc002d502c0) (0xc00255f400) Stream removed, broadcasting: 1 I0422 23:43:17.654024 7 log.go:172] (0xc002d502c0) (0xc001782000) Stream removed, broadcasting: 3 I0422 23:43:17.654033 7 log.go:172] (0xc002d502c0) (0xc00255f540) Stream removed, broadcasting: 5 Apr 22 23:43:17.654: INFO: Found all expected endpoints: [netserver-0] Apr 22 23:43:17.657: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g 
-q -s --max-time 15 --connect-timeout 1 http://10.244.1.186:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3939 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 22 23:43:17.657: INFO: >>> kubeConfig: /root/.kube/config I0422 23:43:17.688912 7 log.go:172] (0xc002d50840) (0xc00255f720) Create stream I0422 23:43:17.688946 7 log.go:172] (0xc002d50840) (0xc00255f720) Stream added, broadcasting: 1 I0422 23:43:17.705328 7 log.go:172] (0xc002d50840) Reply frame received for 1 I0422 23:43:17.705383 7 log.go:172] (0xc002d50840) (0xc002407220) Create stream I0422 23:43:17.705397 7 log.go:172] (0xc002d50840) (0xc002407220) Stream added, broadcasting: 3 I0422 23:43:17.706228 7 log.go:172] (0xc002d50840) Reply frame received for 3 I0422 23:43:17.706253 7 log.go:172] (0xc002d50840) (0xc00255f7c0) Create stream I0422 23:43:17.706263 7 log.go:172] (0xc002d50840) (0xc00255f7c0) Stream added, broadcasting: 5 I0422 23:43:17.706901 7 log.go:172] (0xc002d50840) Reply frame received for 5 I0422 23:43:17.769693 7 log.go:172] (0xc002d50840) Data frame received for 3 I0422 23:43:17.769726 7 log.go:172] (0xc002407220) (3) Data frame handling I0422 23:43:17.769752 7 log.go:172] (0xc002407220) (3) Data frame sent I0422 23:43:17.769928 7 log.go:172] (0xc002d50840) Data frame received for 5 I0422 23:43:17.769971 7 log.go:172] (0xc00255f7c0) (5) Data frame handling I0422 23:43:17.769997 7 log.go:172] (0xc002d50840) Data frame received for 3 I0422 23:43:17.770011 7 log.go:172] (0xc002407220) (3) Data frame handling I0422 23:43:17.771931 7 log.go:172] (0xc002d50840) Data frame received for 1 I0422 23:43:17.771949 7 log.go:172] (0xc00255f720) (1) Data frame handling I0422 23:43:17.771966 7 log.go:172] (0xc00255f720) (1) Data frame sent I0422 23:43:17.771978 7 log.go:172] (0xc002d50840) (0xc00255f720) Stream removed, broadcasting: 1 I0422 23:43:17.771992 7 log.go:172] (0xc002d50840) Go away received I0422 
23:43:17.772123 7 log.go:172] (0xc002d50840) (0xc00255f720) Stream removed, broadcasting: 1
I0422 23:43:17.772157 7 log.go:172] (0xc002d50840) (0xc002407220) Stream removed, broadcasting: 3
I0422 23:43:17.772185 7 log.go:172] (0xc002d50840) (0xc00255f7c0) Stream removed, broadcasting: 5
Apr 22 23:43:17.772: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:17.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3939" for this suite.
• [SLOW TEST:20.417 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":276,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:17.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 22 23:43:17.873: INFO: Waiting up to 5m0s for pod "downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62" in namespace "downward-api-4603" to be "Succeeded or Failed"
Apr 22 23:43:17.922: INFO: Pod "downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62": Phase="Pending", Reason="", readiness=false. Elapsed: 49.150845ms
Apr 22 23:43:19.926: INFO: Pod "downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05350207s
Apr 22 23:43:21.948: INFO: Pod "downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075223605s
STEP: Saw pod success
Apr 22 23:43:21.948: INFO: Pod "downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62" satisfied condition "Succeeded or Failed"
Apr 22 23:43:21.951: INFO: Trying to get logs from node latest-worker2 pod downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62 container dapi-container:
STEP: delete the pod
Apr 22 23:43:21.996: INFO: Waiting for pod downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62 to disappear
Apr 22 23:43:22.008: INFO: Pod downward-api-0cebf64a-6d4c-43bd-9252-9876120b7b62 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:22.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4603" for this suite.
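[Editor's note] The downward-API test above injects the node's IP into the container through an env var sourced from the pod's own `status.hostIP` field. A sketch of such a manifest as a dict; the `fieldRef`/`fieldPath` shape follows the core/v1 API, while the pod name, env var name, and image are illustrative:

```python
# Sketch of a downward-API pod: an env var populated from the pod's
# status.hostIP via fieldRef. Field names follow the core/v1 schema;
# the container name/image and env var name are illustrative assumptions.

def downward_api_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "busybox",
                "command": ["sh", "-c", "env"],
                "env": [{
                    "name": "HOST_IP",
                    # resolved by the kubelet at container start
                    "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
                }],
            }],
        },
    }

pod = downward_api_pod("downward-api-example")
```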
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":297,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:22.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 22 23:43:26.611: INFO: Successfully updated pod "annotationupdateb983bc0e-c566-4674-aa69-dff1a7805296"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:28.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5392" for this suite.
• [SLOW TEST:6.671 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":307,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:28.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 22 23:43:28.742: INFO: Waiting up to 5m0s for pod "pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870" in namespace "emptydir-7607" to be "Succeeded or Failed"
Apr 22 23:43:28.774: INFO: Pod "pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870": Phase="Pending", Reason="", readiness=false. Elapsed: 32.468882ms
Apr 22 23:43:30.780: INFO: Pod "pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038265338s
Apr 22 23:43:32.784: INFO: Pod "pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042619024s
STEP: Saw pod success
Apr 22 23:43:32.784: INFO: Pod "pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870" satisfied condition "Succeeded or Failed"
Apr 22 23:43:32.788: INFO: Trying to get logs from node latest-worker2 pod pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870 container test-container:
STEP: delete the pod
Apr 22 23:43:32.818: INFO: Waiting for pod pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870 to disappear
Apr 22 23:43:32.830: INFO: Pod pod-fa1b0b1d-922f-4f2b-9c61-b807193a4870 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:32.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7607" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":309,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:32.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 22 23:43:33.440: INFO: created pod pod-service-account-defaultsa
Apr 22 23:43:33.440: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 22 23:43:33.452: INFO: created pod pod-service-account-mountsa
Apr 22 23:43:33.452: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 22 23:43:33.501: INFO: created pod pod-service-account-nomountsa
Apr 22 23:43:33.502: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 22 23:43:33.519: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 22 23:43:33.519: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 22 23:43:33.555: INFO: created pod pod-service-account-mountsa-mountspec
Apr 22 23:43:33.555: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 22 23:43:33.592: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 22 23:43:33.592: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 22 23:43:33.625: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 22 23:43:33.625: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 22 23:43:33.653: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 22 23:43:33.653: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 22 23:43:33.696: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 22 23:43:33.696: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:33.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9373" for this suite.
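[Editor's note] The nine pods above cover the token-automount matrix: the pod-level `automountServiceAccountToken` setting, when present, overrides the ServiceAccount's setting, and when both are unset the token is mounted by default. That precedence, which the logged true/false results follow, can be sketched as (this mirrors documented core/v1 semantics; it is an illustrative model, not apiserver code):

```python
# Illustrative model of automountServiceAccountToken precedence:
# pod spec setting > ServiceAccount setting > default (mount).

def mounts_token(pod_setting, sa_setting):
    """Return whether the SA token volume is mounted. None means unset."""
    if pod_setting is not None:
        return pod_setting      # the pod spec always wins
    if sa_setting is not None:
        return sa_setting       # otherwise the ServiceAccount decides
    return True                 # default: mount the token

# Cases matching the log, e.g. pod-service-account-nomountsa-mountspec:
# SA says no, pod says yes -> mounted.
sa_no_pod_yes = mounts_token(pod_setting=True, sa_setting=False)
# pod-service-account-defaultsa-nomountspec: SA unset, pod says no -> not mounted.
sa_unset_pod_no = mounts_token(pod_setting=False, sa_setting=None)
# pod-service-account-defaultsa: both unset -> mounted by default.
both_unset = mounts_token(pod_setting=None, sa_setting=None)
```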
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":28,"skipped":312,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:33.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-9546106d-8678-461f-83df-8ef4319462fb
STEP: Creating configMap with name cm-test-opt-upd-117b285e-03d5-4ec9-9702-fdfa5f0a39af
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9546106d-8678-461f-83df-8ef4319462fb
STEP: Updating configmap cm-test-opt-upd-117b285e-03d5-4ec9-9702-fdfa5f0a39af
STEP: Creating configMap with name cm-test-opt-create-d6090cdf-cf61-44c5-a8a7-b6e40a425470
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:43:52.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6963" for this suite.
• [SLOW TEST:18.213 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":319,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:43:52.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 23:43:52.465: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 23:43:54.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:43:56.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195832, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 23:43:59.746: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:43:59.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8408-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:44:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4637" for this suite.
STEP: Destroying namespace "webhook-4637-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.922 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":30,"skipped":337,"failed":0}
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:44:00.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-8019/secret-test-13b86d11-43bf-47e0-b17f-6caf3034a29c
STEP: Creating a pod to test consume secrets
Apr 22 23:44:01.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a" in namespace "secrets-8019" to be "Succeeded or Failed"
Apr 22 23:44:01.032: INFO: Pod "pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.815593ms
Apr 22 23:44:03.036: INFO: Pod "pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018539273s
Apr 22 23:44:05.040: INFO: Pod "pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022356638s
STEP: Saw pod success
Apr 22 23:44:05.040: INFO: Pod "pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a" satisfied condition "Succeeded or Failed"
Apr 22 23:44:05.043: INFO: Trying to get logs from node latest-worker pod pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a container env-test: 
STEP: delete the pod
Apr 22 23:44:05.078: INFO: Waiting for pod pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a to disappear
Apr 22 23:44:05.082: INFO: Pod pod-configmaps-eca28d82-fd34-4f82-8df2-60808ea77e7a no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:44:05.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8019" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":337,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:44:05.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:44:05.165: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:44:06.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-848" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":32,"skipped":339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:44:06.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-a50ffd40-885f-4ced-88d7-51c31d4dd3c2
STEP: Creating configMap with name cm-test-opt-upd-778a6912-c8cc-42e0-972f-355af48478ce
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a50ffd40-885f-4ced-88d7-51c31d4dd3c2
STEP: Updating configmap cm-test-opt-upd-778a6912-c8cc-42e0-972f-355af48478ce
STEP: Creating configMap with name cm-test-opt-create-2911b4c6-72a5-4004-a138-86889c388e5e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:45:28.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-696" for this suite.
• [SLOW TEST:82.652 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":367,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:45:28.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-38592db4-96d9-4e72-90f3-38697ae8dce2
STEP: Creating a pod to test consume configMaps
Apr 22 23:45:28.932: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a" in namespace "projected-3353" to be "Succeeded or Failed"
Apr 22 23:45:28.948: INFO: Pod "pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.625074ms
Apr 22 23:45:30.953: INFO: Pod "pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021074448s
Apr 22 23:45:32.957: INFO: Pod "pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025384955s
STEP: Saw pod success
Apr 22 23:45:32.957: INFO: Pod "pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a" satisfied condition "Succeeded or Failed"
Apr 22 23:45:32.960: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a container projected-configmap-volume-test: 
STEP: delete the pod
Apr 22 23:45:32.993: INFO: Waiting for pod pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a to disappear
Apr 22 23:45:33.006: INFO: Pod pod-projected-configmaps-08f31275-2ebe-43b6-9f61-e5d4394f725a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:45:33.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3353" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:45:33.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:45:37.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7292" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":393,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:45:37.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 23:45:37.527: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 23:45:39.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723195937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 23:45:42.561: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:45:42.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1413" for this suite.
STEP: Destroying namespace "webhook-1413-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.667 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":36,"skipped":393,"failed":0}
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:45:42.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-d128ca48-f3f3-4ea0-a3eb-9b565dae2db5
STEP: Creating a pod to test consume secrets
Apr 22 23:45:42.918: INFO: Waiting up to 5m0s for pod "pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a" in namespace "secrets-682" to be "Succeeded or Failed"
Apr 22 23:45:42.935: INFO: Pod "pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.72152ms
Apr 22 23:45:45.606: INFO: Pod "pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688289331s
Apr 22 23:45:47.610: INFO: Pod "pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.692003673s
STEP: Saw pod success
Apr 22 23:45:47.610: INFO: Pod "pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a" satisfied condition "Succeeded or Failed"
Apr 22 23:45:47.613: INFO: Trying to get logs from node latest-worker pod pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a container secret-volume-test: 
STEP: delete the pod
Apr 22 23:45:47.702: INFO: Waiting for pod pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a to disappear
Apr 22 23:45:47.713: INFO: Pod pod-secrets-69d53d84-2b4c-43bd-92c8-783db8d8605a no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:45:47.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-682" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":396,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:45:47.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:45:47.891: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 22 23:45:47.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:47.919: INFO: Number of nodes with available pods: 0
Apr 22 23:45:47.919: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:45:48.924: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:48.927: INFO: Number of nodes with available pods: 0
Apr 22 23:45:48.927: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:45:50.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:51.064: INFO: Number of nodes with available pods: 0
Apr 22 23:45:51.064: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:45:51.925: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:51.928: INFO: Number of nodes with available pods: 0
Apr 22 23:45:51.928: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:45:52.927: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:52.931: INFO: Number of nodes with available pods: 1
Apr 22 23:45:52.931: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:45:53.924: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:53.928: INFO: Number of nodes with available pods: 2
Apr 22 23:45:53.928: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 22 23:45:53.958: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:53.958: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:53.977: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:54.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:54.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:54.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:55.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:55.982: INFO: Pod daemon-set-c447p is not available
Apr 22 23:45:55.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:55.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:56.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:56.982: INFO: Pod daemon-set-c447p is not available
Apr 22 23:45:56.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:56.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:57.983: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:57.983: INFO: Pod daemon-set-c447p is not available
Apr 22 23:45:57.983: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:57.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:58.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:58.982: INFO: Pod daemon-set-c447p is not available
Apr 22 23:45:58.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:58.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:45:59.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:59.982: INFO: Pod daemon-set-c447p is not available
Apr 22 23:45:59.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:45:59.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:46:00.982: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:46:00.982: INFO: Pod daemon-set-c447p is not available
Apr 22 23:46:00.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:46:00.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:46:01.985: INFO: Wrong image for pod: daemon-set-c447p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:46:01.985: INFO: Pod daemon-set-c447p is not available
Apr 22 23:46:01.985: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:46:01.991: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:46:03.028: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 22 23:46:03.028: INFO: Pod daemon-set-wt7w9 is not available Apr 22 23:46:03.033: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:03.983: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 23:46:03.983: INFO: Pod daemon-set-wt7w9 is not available Apr 22 23:46:03.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:04.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 23:46:04.982: INFO: Pod daemon-set-wt7w9 is not available Apr 22 23:46:04.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:05.981: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 23:46:05.981: INFO: Pod daemon-set-wt7w9 is not available Apr 22 23:46:05.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:06.982: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 22 23:46:06.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:08.000: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 23:46:08.000: INFO: Pod daemon-set-cgx2j is not available Apr 22 23:46:08.026: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:08.995: INFO: Wrong image for pod: daemon-set-cgx2j. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 22 23:46:08.995: INFO: Pod daemon-set-cgx2j is not available Apr 22 23:46:09.017: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:09.982: INFO: Pod daemon-set-twjv5 is not available Apr 22 23:46:09.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 22 23:46:09.990: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:09.993: INFO: Number of nodes with available pods: 1 Apr 22 23:46:09.993: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:46:10.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:11.002: INFO: Number of nodes with available pods: 1 Apr 22 23:46:11.002: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:46:12.137: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:12.140: INFO: Number of nodes with available pods: 1 Apr 22 23:46:12.140: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:46:13.161: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:46:13.188: INFO: Number of nodes with available pods: 2 Apr 22 23:46:13.188: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3623, will wait for the garbage collector to delete the pods Apr 22 23:46:13.310: INFO: Deleting DaemonSet.extensions daemon-set took: 6.83638ms Apr 22 23:46:13.710: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.261256ms Apr 22 23:46:22.814: INFO: Number of nodes with available pods: 0 Apr 22 23:46:22.814: INFO: Number of running nodes: 0, number of available 
pods: 0 Apr 22 23:46:22.817: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3623/daemonsets","resourceVersion":"10248639"},"items":null} Apr 22 23:46:22.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3623/pods","resourceVersion":"10248639"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:46:22.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3623" for this suite. • [SLOW TEST:35.152 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":38,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:46:22.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a 
default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 22 23:46:22.917: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 22 23:46:33.369: INFO: >>> kubeConfig: /root/.kube/config Apr 22 23:46:35.301: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:46:46.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1225" for this suite. • [SLOW TEST:23.957 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":39,"skipped":517,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:46:46.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 22 23:46:46.889: INFO: PodSpec: initContainers in spec.initContainers Apr 22 23:47:32.164: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4248beea-ae09-4094-8228-d509bcd2f076", GenerateName:"", Namespace:"init-container-5942", SelfLink:"/api/v1/namespaces/init-container-5942/pods/pod-init-4248beea-ae09-4094-8228-d509bcd2f076", UID:"c93ffb81-b14d-4042-9234-364e95f25a1e", ResourceVersion:"10248937", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723196006, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"889237892"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dbc7b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(0xc005d89500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbc7b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbc7b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbc7b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0042e3a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002526a10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0042e3af0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0042e3b10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0042e3b18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0042e3b1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196007, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196007, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196007, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196006, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.104", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.104"}}, StartTime:(*v1.Time)(0xc00228cee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002526af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002526b60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://618afcf50887f8e485e95654e8382ee238a8b640677a9a80e34d13a687c85fc3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00228cf40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00228cf20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0042e3bcf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:47:32.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5942" for this suite. 
• [SLOW TEST:45.370 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":40,"skipped":533,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:47:32.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 22 23:47:32.331: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:32.334: INFO: Number of nodes with available pods: 0 Apr 22 23:47:32.334: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:47:33.384: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:33.386: INFO: Number of nodes with available pods: 0 Apr 22 23:47:33.387: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:47:34.362: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:34.365: INFO: Number of nodes with available pods: 0 Apr 22 23:47:34.365: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:47:35.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:35.344: INFO: Number of nodes with available pods: 0 Apr 22 23:47:35.344: INFO: Node latest-worker is running more than one daemon pod Apr 22 23:47:36.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:36.343: INFO: Number of nodes with available pods: 2 Apr 22 23:47:36.343: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 22 23:47:36.372: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 22 23:47:36.388: INFO: Number of nodes with available pods: 2 Apr 22 23:47:36.388: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8809, will wait for the garbage collector to delete the pods Apr 22 23:47:37.474: INFO: Deleting DaemonSet.extensions daemon-set took: 22.558819ms Apr 22 23:47:37.774: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238956ms Apr 22 23:47:40.278: INFO: Number of nodes with available pods: 0 Apr 22 23:47:40.278: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 23:47:40.292: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8809/daemonsets","resourceVersion":"10249030"},"items":null} Apr 22 23:47:40.294: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8809/pods","resourceVersion":"10249030"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:47:40.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8809" for this suite. 
• [SLOW TEST:8.110 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":41,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:47:40.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 22 23:47:44.396: INFO: &Pod{ObjectMeta:{send-events-7d57b127-6107-4804-b25b-96dba4710893 events-8218 /api/v1/namespaces/events-8218/pods/send-events-7d57b127-6107-4804-b25b-96dba4710893 fd124e43-dbcd-4e5e-8b32-89ab7e782908 10249050 0 2020-04-22 23:47:40 +0000 UTC map[name:foo time:363162646] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7jt9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7jt9q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7jt9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:47:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:47:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:47:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-22 23:47:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.200,StartTime:2020-04-22 23:47:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-22 23:47:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://830303b91c4b227df6cb674a7dccd61043eb007472ee03fbc9de9d601fe8b33d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 22 23:47:46.402: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 22 23:47:48.407: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:47:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8218" for this suite. • [SLOW TEST:8.133 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":42,"skipped":554,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:47:48.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:48:01.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4239" for this suite. • [SLOW TEST:13.428 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":43,"skipped":555,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:48:01.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 22 23:48:01.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 22 23:48:02.126: INFO: stderr: "" Apr 22 23:48:02.126: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:48:02.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8505" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":44,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:48:02.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 22 23:48:02.216: INFO: Waiting up to 5m0s for pod "busybox-user-65534-dbd8e56c-acdc-4eeb-8e26-208248c97ee4" in namespace "security-context-test-5923" to be "Succeeded or Failed" Apr 22 23:48:02.241: INFO: Pod "busybox-user-65534-dbd8e56c-acdc-4eeb-8e26-208248c97ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.680173ms Apr 22 23:48:04.383: INFO: Pod "busybox-user-65534-dbd8e56c-acdc-4eeb-8e26-208248c97ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166767119s Apr 22 23:48:06.387: INFO: Pod "busybox-user-65534-dbd8e56c-acdc-4eeb-8e26-208248c97ee4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.170843407s Apr 22 23:48:06.387: INFO: Pod "busybox-user-65534-dbd8e56c-acdc-4eeb-8e26-208248c97ee4" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:48:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5923" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":628,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:48:06.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 22 23:48:15.371: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 23:48:15.388: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 23:48:17.388: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 23:48:17.391: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 23:48:19.388: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 23:48:19.392: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 23:48:21.388: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 23:48:21.392: INFO: Pod pod-with-poststart-exec-hook still exists Apr 22 23:48:23.388: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 22 23:48:23.393: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:48:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2794" for this suite. 
• [SLOW TEST:17.007 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:48:23.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-d9e38980-e7d2-4faa-b459-fde439b16315 in namespace container-probe-9213 Apr 22 23:48:27.507: INFO: Started pod liveness-d9e38980-e7d2-4faa-b459-fde439b16315 in namespace container-probe-9213 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 
23:48:27.510: INFO: Initial restart count of pod liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is 0 Apr 22 23:48:39.536: INFO: Restart count of pod container-probe-9213/liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is now 1 (12.026266508s elapsed) Apr 22 23:48:59.575: INFO: Restart count of pod container-probe-9213/liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is now 2 (32.065701916s elapsed) Apr 22 23:49:19.618: INFO: Restart count of pod container-probe-9213/liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is now 3 (52.108452187s elapsed) Apr 22 23:49:39.660: INFO: Restart count of pod container-probe-9213/liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is now 4 (1m12.150881792s elapsed) Apr 22 23:50:47.819: INFO: Restart count of pod container-probe-9213/liveness-d9e38980-e7d2-4faa-b459-fde439b16315 is now 5 (2m20.309181677s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:50:47.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9213" for this suite. 
• [SLOW TEST:144.441 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":656,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:50:47.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3784.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3784.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3784.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.77.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.77.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.77.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.77.118_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3784.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3784.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3784.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3784.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3784.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.77.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.77.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.77.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.77.118_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 22 23:50:54.314: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.321: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.325: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.348: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.351: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.354: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod 
dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.357: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:54.375: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local] Apr 22 23:50:59.380: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.388: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.391: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod 
dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.413: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.421: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:50:59.436: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local] Apr 22 23:51:04.380: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod 
dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.410: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not 
find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:04.436: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local] Apr 22 23:51:09.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.386: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.411: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods 
dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.420: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:09.437: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local] Apr 22 23:51:14.381: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf) Apr 22 23:51:14.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods 
dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.406: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.409: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.411: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.413: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:14.429: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local]
Apr 22 23:51:19.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.419: INFO: Unable to read jessie_udp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.424: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.427: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local from pod dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf: the server could not find the requested resource (get pods dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf)
Apr 22 23:51:19.444: INFO: Lookups using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf failed for: [wheezy_udp@dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@dns-test-service.dns-3784.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_udp@dns-test-service.dns-3784.svc.cluster.local jessie_tcp@dns-test-service.dns-3784.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3784.svc.cluster.local]
Apr 22 23:51:24.439: INFO: DNS probes using dns-3784/dns-test-c4cad59b-c24d-4085-a530-1ad28815c9cf succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:25.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3784" for this suite.
• [SLOW TEST:37.181 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":48,"skipped":665,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:25.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 23:51:25.524: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 23:51:27.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196285, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196285, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196285, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723196285, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 23:51:30.576: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:30.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6574" for this suite.
STEP: Destroying namespace "webhook-6574-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.722 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":49,"skipped":674,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:30.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-ea532fdc-d046-4c58-8158-2557a00d8cf1
STEP: Creating a pod to test consume configMaps
Apr 22 23:51:30.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a" in namespace "projected-1467" to be "Succeeded or Failed"
Apr 22 23:51:30.842: INFO: Pod "pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.5879ms
Apr 22 23:51:32.846: INFO: Pod "pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014683796s
Apr 22 23:51:34.850: INFO: Pod "pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019052412s
STEP: Saw pod success
Apr 22 23:51:34.850: INFO: Pod "pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a" satisfied condition "Succeeded or Failed"
Apr 22 23:51:34.853: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a container projected-configmap-volume-test:
STEP: delete the pod
Apr 22 23:51:34.920: INFO: Waiting for pod pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a to disappear
Apr 22 23:51:34.925: INFO: Pod pod-projected-configmaps-48de8d26-e3a8-42b5-abfa-bfd5d80c898a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:34.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1467" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":706,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:34.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-62e594a7-8e03-45a8-ac1a-c913543a8a6d
STEP: Creating a pod to test consume configMaps
Apr 22 23:51:35.165: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d" in namespace "projected-7560" to be "Succeeded or Failed"
Apr 22 23:51:35.192: INFO: Pod "pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.29945ms
Apr 22 23:51:37.197: INFO: Pod "pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031943515s
Apr 22 23:51:39.201: INFO: Pod "pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036309452s
STEP: Saw pod success
Apr 22 23:51:39.201: INFO: Pod "pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d" satisfied condition "Succeeded or Failed"
Apr 22 23:51:39.205: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d container projected-configmap-volume-test:
STEP: delete the pod
Apr 22 23:51:39.234: INFO: Waiting for pod pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d to disappear
Apr 22 23:51:39.246: INFO: Pod pod-projected-configmaps-7f013e8d-8e81-4ebd-ae2d-3d9f593f3a1d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:39.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7560" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:39.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 22 23:51:39.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe" in namespace "projected-5964" to be "Succeeded or Failed"
Apr 22 23:51:39.360: INFO: Pod "downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301285ms
Apr 22 23:51:41.404: INFO: Pod "downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046747713s
Apr 22 23:51:43.409: INFO: Pod "downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051567353s
STEP: Saw pod success
Apr 22 23:51:43.409: INFO: Pod "downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe" satisfied condition "Succeeded or Failed"
Apr 22 23:51:43.412: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe container client-container:
STEP: delete the pod
Apr 22 23:51:43.459: INFO: Waiting for pod downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe to disappear
Apr 22 23:51:43.468: INFO: Pod downwardapi-volume-9ea04181-1daa-49f3-8044-96aea6e984fe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:43.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5964" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":764,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:43.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:51:43.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 22 23:51:46.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2111 create -f -'
Apr 22 23:51:49.158: INFO: stderr: ""
Apr 22 23:51:49.158: INFO: stdout: "e2e-test-crd-publish-openapi-2085-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 22 23:51:49.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2111 delete e2e-test-crd-publish-openapi-2085-crds test-cr'
Apr 22 23:51:49.267: INFO: stderr: ""
Apr 22 23:51:49.267: INFO: stdout: "e2e-test-crd-publish-openapi-2085-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 22 23:51:49.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2111 apply -f -'
Apr 22 23:51:49.501: INFO: stderr: ""
Apr 22 23:51:49.501: INFO: stdout: "e2e-test-crd-publish-openapi-2085-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 22 23:51:49.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2111 delete e2e-test-crd-publish-openapi-2085-crds test-cr'
Apr 22 23:51:49.607: INFO: stderr: ""
Apr 22 23:51:49.607: INFO: stdout: "e2e-test-crd-publish-openapi-2085-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 22 23:51:49.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2085-crds'
Apr 22 23:51:49.845: INFO: stderr: ""
Apr 22 23:51:49.845: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2085-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:52.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2111" for this suite.
• [SLOW TEST:9.279 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":53,"skipped":764,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:52.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777
on node default medium
Apr 22 23:51:52.832: INFO: Waiting up to 5m0s for pod "pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24" in namespace "emptydir-6168" to be "Succeeded or Failed"
Apr 22 23:51:52.863: INFO: Pod "pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24": Phase="Pending", Reason="", readiness=false. Elapsed: 30.937726ms
Apr 22 23:51:54.866: INFO: Pod "pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034138188s
Apr 22 23:51:56.870: INFO: Pod "pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038523798s
STEP: Saw pod success
Apr 22 23:51:56.871: INFO: Pod "pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24" satisfied condition "Succeeded or Failed"
Apr 22 23:51:56.874: INFO: Trying to get logs from node latest-worker2 pod pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24 container test-container:
STEP: delete the pod
Apr 22 23:51:56.892: INFO: Waiting for pod pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24 to disappear
Apr 22 23:51:56.937: INFO: Pod pod-ca7129c4-55a7-4a29-a6ed-f49367d96b24 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:51:56.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6168" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":777,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:51:56.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 22 23:51:56.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc" in namespace "projected-6140" to be "Succeeded or Failed"
Apr 22 23:51:57.007: INFO: Pod "downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.8268ms
Apr 22 23:51:59.027: INFO: Pod "downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033661003s
Apr 22 23:52:01.030: INFO: Pod "downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.037015317s
STEP: Saw pod success
Apr 22 23:52:01.030: INFO: Pod "downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc" satisfied condition "Succeeded or Failed"
Apr 22 23:52:01.033: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc container client-container:
STEP: delete the pod
Apr 22 23:52:01.123: INFO: Waiting for pod downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc to disappear
Apr 22 23:52:01.158: INFO: Pod downwardapi-volume-6c481d1b-605d-417c-82f7-ce9153bd50bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:01.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6140" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":812,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:01.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:52:01.284: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:52:03.288: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:52:05.287: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:07.289: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:09.289: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:11.287: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:13.288: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:15.288: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:17.289: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:19.288: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:21.289: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = false)
Apr 22 23:52:23.288: INFO: The status of Pod test-webserver-3860b28f-1eac-491c-b118-136bb0ef96a8 is Running (Ready = true)
Apr 22 23:52:23.291: INFO: Container started at 2020-04-22 23:52:03 +0000 UTC, pod became ready at 2020-04-22 23:52:21 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:23.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1895" for this suite.
• [SLOW TEST:22.135 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":818,"failed":0}
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:23.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-96d4a744-6483-4639-af15-ac5988f0765c
STEP: Creating a pod to test consume secrets
Apr 22 23:52:23.373: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e" in namespace "projected-4182" to be "Succeeded or Failed"
Apr 22 23:52:23.397: INFO: Pod "pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.33893ms
Apr 22 23:52:25.488: INFO: Pod "pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11503047s
Apr 22 23:52:27.492: INFO: Pod "pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11890907s
STEP: Saw pod success
Apr 22 23:52:27.492: INFO: Pod "pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e" satisfied condition "Succeeded or Failed"
Apr 22 23:52:27.495: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e container projected-secret-volume-test:
STEP: delete the pod
Apr 22 23:52:27.515: INFO: Waiting for pod pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e to disappear
Apr 22 23:52:27.520: INFO: Pod pod-projected-secrets-90680797-20ea-420a-93fc-5cb7cee04b4e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:27.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4182" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":818,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:27.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:52:27.594: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7ba50457-96cc-48fe-8ab9-6e670b5997f5" in namespace "security-context-test-9868" to be "Succeeded or Failed"
Apr 22 23:52:27.610: INFO: Pod "busybox-readonly-false-7ba50457-96cc-48fe-8ab9-6e670b5997f5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.768883ms
Apr 22 23:52:29.614: INFO: Pod "busybox-readonly-false-7ba50457-96cc-48fe-8ab9-6e670b5997f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02026273s
Apr 22 23:52:31.661: INFO: Pod "busybox-readonly-false-7ba50457-96cc-48fe-8ab9-6e670b5997f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067472707s
Apr 22 23:52:31.661: INFO: Pod "busybox-readonly-false-7ba50457-96cc-48fe-8ab9-6e670b5997f5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:31.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9868" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":830,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:31.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 22 23:52:31.764: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:38.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8963" for this suite.
• [SLOW TEST:7.136 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":59,"skipped":837,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:38.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 22 23:52:38.859: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:45.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1256" for this suite.
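The RestartNever and RestartAlways cases both exercise `spec.initContainers`, which run to completion, in order, before the app containers start. A minimal pod of the shape these tests create (names and image are illustrative assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example          # illustrative name
spec:
  restartPolicy: Never            # the RestartNever variant; the other test uses Always
  initContainers:                 # each must exit 0 before the next one starts
  - name: init1
    image: docker.io/library/busybox:1.29   # assumed image
    command: ['/bin/true']
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ['/bin/true']
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ['/bin/sh', '-c', 'sleep 1']
```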
• [SLOW TEST:7.030 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":60,"skipped":838,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:45.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Apr 22 23:52:45.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1699'
Apr 22 23:52:46.266: INFO: stderr: ""
Apr 22 23:52:46.266: INFO: stdout: "pod/pause created\n"
Apr 22 23:52:46.266: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 22 23:52:46.266: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1699" to be "running and ready"
Apr 22 23:52:46.269: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17359ms
Apr 22 23:52:48.272: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00640995s
Apr 22 23:52:50.276: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.010656151s
Apr 22 23:52:50.276: INFO: Pod "pause" satisfied condition "running and ready"
Apr 22 23:52:50.276: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 22 23:52:50.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1699'
Apr 22 23:52:50.378: INFO: stderr: ""
Apr 22 23:52:50.378: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 22 23:52:50.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1699'
Apr 22 23:52:50.480: INFO: stderr: ""
Apr 22 23:52:50.480: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 22 23:52:50.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1699'
Apr 22 23:52:50.586: INFO: stderr: ""
Apr 22 23:52:50.586: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 22 23:52:50.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1699'
Apr 22 23:52:50.698: INFO: stderr: ""
Apr 22 23:52:50.698: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Apr 22 23:52:50.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1699'
Apr 22 23:52:50.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 22 23:52:50.808: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 22 23:52:50.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1699'
Apr 22 23:52:50.929: INFO: stderr: "No resources found in kubectl-1699 namespace.\n"
Apr 22 23:52:50.929: INFO: stdout: ""
Apr 22 23:52:50.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1699 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 23:52:51.149: INFO: stderr: ""
Apr 22 23:52:51.149: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:52:51.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1699" for this suite.
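In manifest terms, the `kubectl label pods pause testing-label=testing-label-value` step above simply adds a key to `metadata.labels`, and the trailing-dash form (`testing-label-`) removes it again. A sketch of the labeled state (only the label shown in the log is real; the rest of the metadata is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  namespace: kubectl-1699
  labels:
    testing-label: testing-label-value   # added by `kubectl label`, removed by `testing-label-`
```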
• [SLOW TEST:5.358 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":61,"skipped":869,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:52:51.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0422 23:53:01.346632 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 22 23:53:01.346: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:01.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2804" for this suite.
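The garbage-collector case creates a ReplicationController, deletes it without orphaning, and waits for the dependent pods to disappear. A sketch of the controller involved (name, labels, and image are illustrative assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc        # illustrative name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # assumed image
```

Because the delete is issued with a non-orphaning propagation policy, the pods' `metadata.ownerReferences` back to the RC let the garbage collector remove them, which is what the "wait for all pods to be garbage collected" step observes.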
• [SLOW TEST:10.158 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":62,"skipped":873,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:53:01.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:02.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5680" for this suite.
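The Lease check exercises CRUD against the `coordination.k8s.io` API group. A minimal Lease object of the kind such a test round-trips (name and spec values are illustrative assumptions):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease        # illustrative name
  namespace: lease-test-5680
spec:
  holderIdentity: holder-1   # identity of the current holder
  leaseDurationSeconds: 30   # how long the holder's claim is valid
```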
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":63,"skipped":885,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:53:02.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-98a4fc3b-9c16-4895-8058-7055bc98daae
STEP: Creating a pod to test consume secrets
Apr 22 23:53:02.383: INFO: Waiting up to 5m0s for pod "pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f" in namespace "secrets-7498" to be "Succeeded or Failed"
Apr 22 23:53:02.390: INFO: Pod "pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702606ms
Apr 22 23:53:04.406: INFO: Pod "pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023197957s
Apr 22 23:53:06.423: INFO: Pod "pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039868531s
STEP: Saw pod success
Apr 22 23:53:06.423: INFO: Pod "pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f" satisfied condition "Succeeded or Failed"
Apr 22 23:53:06.426: INFO: Trying to get logs from node latest-worker pod pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f container secret-env-test:
STEP: delete the pod
Apr 22 23:53:06.444: INFO: Waiting for pod pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f to disappear
Apr 22 23:53:06.455: INFO: Pod pod-secrets-a5531c15-cf18-4154-8336-a1ef496aa56f no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:06.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7498" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":890,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:53:06.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 22 23:53:06.517: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:53:09.463: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:19.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9446" for this suite.
• [SLOW TEST:12.550 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":65,"skipped":895,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:53:19.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 22 23:53:19.219: INFO: Create a RollingUpdate DaemonSet
Apr 22 23:53:19.222: INFO: Check that daemon pods launch on every node of the cluster
Apr 22 23:53:19.231: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:19.242: INFO: Number of nodes with available pods: 0
Apr 22 23:53:19.242: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:53:20.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:20.250: INFO: Number of nodes with available pods: 0
Apr 22 23:53:20.250: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:53:21.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:21.250: INFO: Number of nodes with available pods: 0
Apr 22 23:53:21.250: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:53:22.256: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:22.259: INFO: Number of nodes with available pods: 0
Apr 22 23:53:22.259: INFO: Node latest-worker is running more than one daemon pod
Apr 22 23:53:23.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:23.251: INFO: Number of nodes with available pods: 2
Apr 22 23:53:23.251: INFO: Number of running nodes: 2, number of available pods: 2
Apr 22 23:53:23.251: INFO: Update the DaemonSet to trigger a rollout
Apr 22 23:53:23.258: INFO: Updating DaemonSet daemon-set
Apr 22 23:53:33.278: INFO: Roll back the DaemonSet before rollout is complete
Apr 22 23:53:33.282: INFO: Updating DaemonSet daemon-set
Apr 22 23:53:33.282: INFO: Make sure DaemonSet rollback is complete
Apr 22 23:53:33.288: INFO: Wrong image for pod: daemon-set-cjmb6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 22 23:53:33.288: INFO: Pod daemon-set-cjmb6 is not available
Apr 22 23:53:33.304: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:34.308: INFO: Wrong image for pod: daemon-set-cjmb6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 22 23:53:34.308: INFO: Pod daemon-set-cjmb6 is not available
Apr 22 23:53:34.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:35.447: INFO: Wrong image for pod: daemon-set-cjmb6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 22 23:53:35.447: INFO: Pod daemon-set-cjmb6 is not available
Apr 22 23:53:35.451: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:36.308: INFO: Wrong image for pod: daemon-set-cjmb6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 22 23:53:36.308: INFO: Pod daemon-set-cjmb6 is not available
Apr 22 23:53:36.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 23:53:37.308: INFO: Pod daemon-set-lf6b5 is not available
Apr 22 23:53:37.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8717, will wait for the garbage collector to delete the pods
Apr 22 23:53:37.382: INFO: Deleting DaemonSet.extensions daemon-set took: 9.940655ms
Apr 22 23:53:37.682: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.19116ms
Apr 22 23:53:43.085: INFO: Number of nodes with available pods: 0
Apr 22 23:53:43.085: INFO: Number of running nodes: 0, number of available pods: 0
Apr 22 23:53:43.088: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8717/daemonsets","resourceVersion":"10250936"},"items":null}
Apr 22 23:53:43.091: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8717/pods","resourceVersion":"10250936"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:43.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8717" for this suite.
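The rollback scenario above starts from a RollingUpdate DaemonSet running `docker.io/library/httpd:2.4.38-alpine`, updates it to the unschedulable image `foo:non-existent` (both images appear in the log), and then rolls the template back before the rollout completes. A sketch of the starting manifest (label key is an illustrative assumption):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set       # illustrative label key
  updateStrategy:
    type: RollingUpdate                # the strategy the test exercises
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the "good" image from the log
```

The pass criterion is that reverting the template only replaces the pod already broken by the bad image; healthy pods on other nodes are not restarted.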
• [SLOW TEST:24.097 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":66,"skipped":928,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 22 23:53:43.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 22 23:53:43.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4394'
Apr 22 23:53:43.475: INFO: stderr: ""
Apr 22 23:53:43.475: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 22 23:53:43.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4394'
Apr 22 23:53:43.627: INFO: stderr: ""
Apr 22 23:53:43.627: INFO: stdout: "update-demo-nautilus-5mbsv update-demo-nautilus-t96ck "
Apr 22 23:53:43.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mbsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4394'
Apr 22 23:53:43.747: INFO: stderr: ""
Apr 22 23:53:43.747: INFO: stdout: ""
Apr 22 23:53:43.747: INFO: update-demo-nautilus-5mbsv is created but not running
Apr 22 23:53:48.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4394'
Apr 22 23:53:48.845: INFO: stderr: ""
Apr 22 23:53:48.845: INFO: stdout: "update-demo-nautilus-5mbsv update-demo-nautilus-t96ck "
Apr 22 23:53:48.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mbsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4394'
Apr 22 23:53:48.969: INFO: stderr: ""
Apr 22 23:53:48.969: INFO: stdout: "true"
Apr 22 23:53:48.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5mbsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4394'
Apr 22 23:53:49.054: INFO: stderr: ""
Apr 22 23:53:49.054: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 23:53:49.054: INFO: validating pod update-demo-nautilus-5mbsv
Apr 22 23:53:49.058: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 22 23:53:49.058: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 23:53:49.058: INFO: update-demo-nautilus-5mbsv is verified up and running
Apr 22 23:53:49.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t96ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4394'
Apr 22 23:53:49.152: INFO: stderr: ""
Apr 22 23:53:49.152: INFO: stdout: "true"
Apr 22 23:53:49.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t96ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4394'
Apr 22 23:53:49.252: INFO: stderr: ""
Apr 22 23:53:49.252: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 22 23:53:49.252: INFO: validating pod update-demo-nautilus-t96ck
Apr 22 23:53:49.255: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 22 23:53:49.255: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 22 23:53:49.255: INFO: update-demo-nautilus-t96ck is verified up and running
STEP: using delete to clean up resources
Apr 22 23:53:49.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4394'
Apr 22 23:53:49.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 22 23:53:49.360: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 22 23:53:49.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4394'
Apr 22 23:53:49.468: INFO: stderr: "No resources found in kubectl-4394 namespace.\n"
Apr 22 23:53:49.468: INFO: stdout: ""
Apr 22 23:53:49.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4394 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 23:53:49.575: INFO: stderr: ""
Apr 22 23:53:49.575: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 22 23:53:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4394" for this suite.
• [SLOW TEST:6.473 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":67,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:53:49.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:53:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7187" for this suite. 
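The Kubelet test above schedules a busybox pod whose command writes to stdout, then verifies the kubelet surfaces that output through the logs endpoint. A minimal sketch of such a pod (names and the echoed string are hypothetical, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo        # hypothetical name
spec:
  restartPolicy: Never           # run once, keep the terminated container's logs
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello-from-kubelet-test"]
```

After the pod completes, `kubectl logs busybox-logs-demo` would return the echoed line, which is the behavior the conformance test asserts.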
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:53:53.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7152 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 22 23:53:53.817: INFO: Found 0 stateful pods, waiting for 3 Apr 22 23:54:03.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 23:54:03.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 23:54:03.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 22 23:54:13.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, 
currently Running - Ready=true Apr 22 23:54:13.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 23:54:13.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 22 23:54:13.848: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 22 23:54:23.884: INFO: Updating stateful set ss2 Apr 22 23:54:23.913: INFO: Waiting for Pod statefulset-7152/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 22 23:54:34.083: INFO: Found 2 stateful pods, waiting for 3 Apr 22 23:54:44.088: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 22 23:54:44.088: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 22 23:54:44.088: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 22 23:54:44.112: INFO: Updating stateful set ss2 Apr 22 23:54:44.185: INFO: Waiting for Pod statefulset-7152/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 22 23:54:54.210: INFO: Updating stateful set ss2 Apr 22 23:54:54.249: INFO: Waiting for StatefulSet statefulset-7152/ss2 to complete update Apr 22 23:54:54.249: INFO: Waiting for Pod statefulset-7152/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 22 23:55:04.257: INFO: Waiting for StatefulSet statefulset-7152/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 22 
23:55:14.258: INFO: Deleting all statefulset in ns statefulset-7152 Apr 22 23:55:14.260: INFO: Scaling statefulset ss2 to 0 Apr 22 23:55:44.278: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 23:55:44.282: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:55:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7152" for this suite. • [SLOW TEST:110.569 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":69,"skipped":1038,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:55:44.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get 
a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 22 23:55:48.394: INFO: Pod pod-hostip-04ea84ac-a94c-45c8-95b4-48f248766abd has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:55:48.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5294" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1047,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:55:48.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-bn5z STEP: Creating a pod to test atomic-volume-subpath Apr 22 23:55:48.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bn5z" in namespace "subpath-4054" to be "Succeeded or Failed" Apr 22 23:55:48.473: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.633883ms Apr 22 23:55:50.478: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009272551s Apr 22 23:55:52.482: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 4.013575041s Apr 22 23:55:54.487: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 6.01789538s Apr 22 23:55:56.491: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 8.022265359s Apr 22 23:55:58.495: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 10.026469076s Apr 22 23:56:00.499: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 12.030116013s Apr 22 23:56:02.504: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 14.034645948s Apr 22 23:56:04.508: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 16.039151461s Apr 22 23:56:06.521: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 18.052182785s Apr 22 23:56:08.526: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 20.056789063s Apr 22 23:56:10.531: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Running", Reason="", readiness=true. Elapsed: 22.061614081s Apr 22 23:56:12.535: INFO: Pod "pod-subpath-test-downwardapi-bn5z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065684122s STEP: Saw pod success Apr 22 23:56:12.535: INFO: Pod "pod-subpath-test-downwardapi-bn5z" satisfied condition "Succeeded or Failed" Apr 22 23:56:12.538: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-bn5z container test-container-subpath-downwardapi-bn5z: STEP: delete the pod Apr 22 23:56:12.570: INFO: Waiting for pod pod-subpath-test-downwardapi-bn5z to disappear Apr 22 23:56:12.574: INFO: Pod pod-subpath-test-downwardapi-bn5z no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bn5z Apr 22 23:56:12.574: INFO: Deleting pod "pod-subpath-test-downwardapi-bn5z" in namespace "subpath-4054" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:56:12.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4054" for this suite. • [SLOW TEST:24.181 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":71,"skipped":1049,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:56:12.584: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4012, will wait for the garbage collector to delete the pods Apr 22 23:56:16.716: INFO: Deleting Job.batch foo took: 6.524882ms Apr 22 23:56:17.016: INFO: Terminating Job.batch foo pods took: 300.282655ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:56:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4012" for this suite. • [SLOW TEST:40.273 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":72,"skipped":1059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:56:52.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] 
SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 22 23:56:52.911: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 22 23:56:52.922: INFO: Waiting for terminating namespaces to be deleted... Apr 22 23:56:52.924: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 22 23:56:52.929: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 22 23:56:52.929: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 23:56:52.929: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 22 23:56:52.929: INFO: Container kube-proxy ready: true, restart count 0 Apr 22 23:56:52.929: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 22 23:56:52.945: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 22 23:56:52.945: INFO: Container kindnet-cni ready: true, restart count 0 Apr 22 23:56:52.945: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 22 23:56:52.945: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 22 23:56:53.050: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 22 23:56:53.050: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 22 23:56:53.050: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 22 23:56:53.050: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker 
STEP: Starting Pods to consume most of the cluster CPU. Apr 22 23:56:53.050: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 22 23:56:53.056: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f.1608499e56fbacd1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-357/filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f.1608499ea0615c74], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f.1608499edb3e45c3], Reason = [Created], Message = [Created container filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f] STEP: Considering event: Type = [Normal], Name = [filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f.1608499eef44eadb], Reason = [Started], Message = [Started container filler-pod-6c6d7a7a-9f4b-4a34-aba8-f14c436ff70f] STEP: Considering event: Type = [Normal], Name = [filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b.1608499e5b4bd1df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-357/filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b.1608499ed0133017], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b.1608499f05ac460f], Reason = [Created], Message = [Created container filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b.1608499f15c66a3f], Reason = [Started], Message = [Started container filler-pod-e573448b-90bc-469c-97cf-8b22ba00f56b] STEP: Considering event: Type = [Warning], Name = [additional-pod.1608499f4acbab97], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:56:58.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-357" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.416 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":73,"skipped":1082,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:56:58.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-0af49326-be14-48dd-9c4e-df40b85a817c in namespace container-probe-4712 Apr 22 23:57:02.383: INFO: Started pod liveness-0af49326-be14-48dd-9c4e-df40b85a817c in namespace container-probe-4712 STEP: checking the pod's current state and verifying that restartCount is present Apr 22 23:57:02.386: INFO: Initial restart count of pod liveness-0af49326-be14-48dd-9c4e-df40b85a817c is 0 Apr 22 23:57:26.503: INFO: Restart count of pod container-probe-4712/liveness-0af49326-be14-48dd-9c4e-df40b85a817c is now 1 (24.11654996s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:26.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4712" for this suite. 
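The probe test above creates a pod with an HTTP liveness probe against `/healthz`, waits for the probe to fail, and checks that `restartCount` increments (here from 0 to 1 after ~24s). A sketch of the shape of such a pod spec, with a placeholder image and illustrative timing values rather than the ones this test actually used:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo       # hypothetical name
spec:
  containers:
  - name: liveness
    image: example/healthz-server  # placeholder: any server exposing /healthz
    livenessProbe:
      httpGet:
        path: /healthz           # kubelet polls this endpoint
        port: 8080
      initialDelaySeconds: 5     # illustrative values, not from this run
      periodSeconds: 3
      failureThreshold: 3
```

When the endpoint starts returning non-2xx/3xx responses, the kubelet kills and restarts the container, which is what the log's "Restart count ... is now 1" line reflects.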
• [SLOW TEST:28.277 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1086,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:26.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-14d0ff5e-6728-4977-98f6-2959ed7ab900 STEP: Creating a pod to test consume secrets Apr 22 23:57:26.632: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e" in namespace "projected-6588" to be "Succeeded or Failed" Apr 22 23:57:26.636: INFO: Pod "pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.570684ms Apr 22 23:57:28.639: INFO: Pod "pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00741728s Apr 22 23:57:30.644: INFO: Pod "pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011639481s STEP: Saw pod success Apr 22 23:57:30.644: INFO: Pod "pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e" satisfied condition "Succeeded or Failed" Apr 22 23:57:30.647: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e container projected-secret-volume-test: STEP: delete the pod Apr 22 23:57:30.679: INFO: Waiting for pod pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e to disappear Apr 22 23:57:30.684: INFO: Pod pod-projected-secrets-ee997d1a-3bea-4e59-8c9e-2b0125e0212e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:30.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6588" for this suite. 
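The projected-secret test above mounts a Secret into a pod through a `projected` volume and reads it back from the container. A minimal sketch, assuming a pre-existing Secret (the Secret name, key, and mount path below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/secret-key"]  # hypothetical key name
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: my-secret        # assumed to exist in the namespace
```

The test passes when the pod reaches "Succeeded" and the container's logs contain the secret's value, mirroring the "Saw pod success" flow in the log.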
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:30.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 22 23:57:30.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e" in namespace "downward-api-9011" to be "Succeeded or Failed" Apr 22 23:57:30.774: INFO: Pod "downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.867121ms Apr 22 23:57:32.778: INFO: Pod "downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015518028s Apr 22 23:57:34.782: INFO: Pod "downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020098984s STEP: Saw pod success Apr 22 23:57:34.782: INFO: Pod "downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e" satisfied condition "Succeeded or Failed" Apr 22 23:57:34.785: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e container client-container: STEP: delete the pod Apr 22 23:57:34.805: INFO: Waiting for pod downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e to disappear Apr 22 23:57:34.810: INFO: Pod downwardapi-volume-8b3a5953-fc2c-4955-8d29-a26e329cd93e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:34.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9011" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1162,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:34.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 22 23:57:34.871: INFO: Waiting up to 5m0s for pod 
"client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0" in namespace "containers-3188" to be "Succeeded or Failed" Apr 22 23:57:34.892: INFO: Pod "client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.606073ms Apr 22 23:57:36.911: INFO: Pod "client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040005377s Apr 22 23:57:38.916: INFO: Pod "client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044398499s STEP: Saw pod success Apr 22 23:57:38.916: INFO: Pod "client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0" satisfied condition "Succeeded or Failed" Apr 22 23:57:38.919: INFO: Trying to get logs from node latest-worker pod client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0 container test-container: STEP: delete the pod Apr 22 23:57:38.937: INFO: Waiting for pod client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0 to disappear Apr 22 23:57:38.941: INFO: Pod client-containers-eafdd49a-c54a-467b-9844-c5f018b43aa0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3188" for this suite. 
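The Docker Containers test above ("test override all") verifies that a pod spec's `command` and `args` replace the image's built-in ENTRYPOINT and CMD. A sketch of that override (pod name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]            # replaces the image's ENTRYPOINT
    args: ["overridden", "output"]  # replaces the image's CMD
```

With both fields set, the container runs `echo overridden output` regardless of what the image itself defines, which is the behavior the test asserts by inspecting the pod's logs.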
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1177,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:38.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 22 23:57:39.003: INFO: namespace kubectl-5709 Apr 22 23:57:39.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5709' Apr 22 23:57:39.253: INFO: stderr: "" Apr 22 23:57:39.253: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 22 23:57:40.258: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 23:57:40.258: INFO: Found 0 / 1 Apr 22 23:57:41.258: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 23:57:41.258: INFO: Found 0 / 1 Apr 22 23:57:42.258: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 23:57:42.258: INFO: Found 1 / 1 Apr 22 23:57:42.258: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Apr 22 23:57:42.261: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 23:57:42.261: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 22 23:57:42.261: INFO: wait on agnhost-master startup in kubectl-5709 Apr 22 23:57:42.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-lrclc agnhost-master --namespace=kubectl-5709' Apr 22 23:57:42.376: INFO: stderr: "" Apr 22 23:57:42.376: INFO: stdout: "Paused\n" STEP: exposing RC Apr 22 23:57:42.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5709' Apr 22 23:57:42.530: INFO: stderr: "" Apr 22 23:57:42.530: INFO: stdout: "service/rm2 exposed\n" Apr 22 23:57:42.541: INFO: Service rm2 in namespace kubectl-5709 found. STEP: exposing service Apr 22 23:57:44.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5709' Apr 22 23:57:44.704: INFO: stderr: "" Apr 22 23:57:44.704: INFO: stdout: "service/rm3 exposed\n" Apr 22 23:57:44.708: INFO: Service rm3 in namespace kubectl-5709 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:46.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5709" for this suite. 
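The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation above builds a Service that reuses the controller's label selector. A rough sketch of that translation (a hypothetical helper; the real logic lives in kubectl's expose generator):

```python
def expose(name, selector, port, target_port):
    """Build a minimal Service manifest the way `kubectl expose` would:
    copy the workload's selector and map port -> targetPort."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),  # copied from the RC being exposed
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# Mirrors the log: expose the agnhost RC on 1234, forwarding to 6379.
svc = expose("rm2", {"app": "agnhost"}, 1234, 6379)
print(svc["spec"]["ports"])
```

Exposing a Service (the `rm3` step in the log) works the same way, except the selector is copied from the source Service rather than from a controller.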
• [SLOW TEST:7.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":78,"skipped":1179,"failed":0} SSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:46.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 22 23:57:46.776: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 22 23:57:46.780: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 22 23:57:46.780: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 22 23:57:46.816: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 22 23:57:46.816: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 22 23:57:46.828: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 22 23:57:46.828: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 22 23:57:53.964: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:57:53.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8244" for this suite. • [SLOW TEST:7.308 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":79,"skipped":1182,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:57:54.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qjxl STEP: Creating a pod to test atomic-volume-subpath Apr 22 23:57:54.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qjxl" in namespace "subpath-2525" to be "Succeeded or Failed" Apr 22 23:57:54.188: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Pending", Reason="", readiness=false. Elapsed: 22.217186ms Apr 22 23:57:56.253: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087548814s Apr 22 23:57:58.258: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 4.091861773s Apr 22 23:58:00.331: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 6.165365218s Apr 22 23:58:02.335: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.169218274s Apr 22 23:58:04.340: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 10.173877076s Apr 22 23:58:06.343: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 12.177348494s Apr 22 23:58:08.347: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 14.181301051s Apr 22 23:58:10.351: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 16.185537536s Apr 22 23:58:12.355: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 18.18887752s Apr 22 23:58:14.359: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 20.192921125s Apr 22 23:58:16.363: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 22.197228822s Apr 22 23:58:18.626: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Running", Reason="", readiness=true. Elapsed: 24.46058942s Apr 22 23:58:20.631: INFO: Pod "pod-subpath-test-configmap-qjxl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.464928733s STEP: Saw pod success Apr 22 23:58:20.631: INFO: Pod "pod-subpath-test-configmap-qjxl" satisfied condition "Succeeded or Failed" Apr 22 23:58:20.634: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-qjxl container test-container-subpath-configmap-qjxl: STEP: delete the pod Apr 22 23:58:20.668: INFO: Waiting for pod pod-subpath-test-configmap-qjxl to disappear Apr 22 23:58:20.738: INFO: Pod pod-subpath-test-configmap-qjxl no longer exists STEP: Deleting pod pod-subpath-test-configmap-qjxl Apr 22 23:58:20.738: INFO: Deleting pod "pod-subpath-test-configmap-qjxl" in namespace "subpath-2525" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:58:20.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2525" for this suite. • [SLOW TEST:26.716 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":80,"skipped":1201,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 
22 23:58:20.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-1ef6a0ad-dbdb-41c8-b155-bb42a900c3bd STEP: Creating secret with name s-test-opt-upd-f0caf9bf-0370-411f-a809-6ad26e3d2899 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1ef6a0ad-dbdb-41c8-b155-bb42a900c3bd STEP: Updating secret s-test-opt-upd-f0caf9bf-0370-411f-a809-6ad26e3d2899 STEP: Creating secret with name s-test-opt-create-5407357d-89b6-498a-8aad-2ca1d8cf2dfa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:59:43.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7232" for this suite. 
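The projected-secret test above deletes one optional secret, updates a second, and creates a third, then waits for the volume contents to converge. The key semantics being exercised: an *optional* secret that does not exist simply contributes no files, and the kubelet re-projects the volume as secrets change. A toy model of that projection (hypothetical helper, not the kubelet's implementation):

```python
def project_secrets(sources, secrets):
    """Render projected-volume files from secret sources.

    `sources` is a list of (secret_name, optional) pairs; `secrets` maps
    currently existing secret names to their data. Optional missing
    secrets are skipped; a required missing secret is an error.
    """
    files = {}
    for name, optional in sources:
        if name not in secrets:
            if optional:
                continue  # optional + absent => contributes no files
            raise KeyError(f"required secret {name} not found")
        for key, value in secrets[name].items():
            files[f"{name}/{key}"] = value  # kubelet re-syncs these on change
    return files

# After the "del" secret is deleted, only the updated secret's data remains.
secrets = {"s-test-opt-upd": {"data-1": "value-1"}}
sources = [("s-test-opt-del", True), ("s-test-opt-upd", True)]
print(project_secrets(sources, secrets))
```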
• [SLOW TEST:82.530 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1213,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:59:43.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 22 23:59:43.352: INFO: Waiting up to 5m0s for pod "pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336" in namespace "emptydir-7055" to be "Succeeded or Failed" Apr 22 23:59:43.358: INFO: Pod "pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177182ms Apr 22 23:59:45.363: INFO: Pod "pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011286418s Apr 22 23:59:47.368: INFO: Pod "pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015455722s STEP: Saw pod success Apr 22 23:59:47.368: INFO: Pod "pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336" satisfied condition "Succeeded or Failed" Apr 22 23:59:47.370: INFO: Trying to get logs from node latest-worker pod pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336 container test-container: STEP: delete the pod Apr 22 23:59:47.422: INFO: Waiting for pod pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336 to disappear Apr 22 23:59:47.429: INFO: Pod pod-34c6eeed-428a-48d6-bb5a-f4c5b9fd0336 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:59:47.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7055" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:59:47.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: 
expected 0 pods, got 2 pods STEP: Gathering metrics W0422 23:59:48.596356 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 22 23:59:48.596: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:59:48.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8341" for this suite. 
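The garbage-collector test above deletes a Deployment without orphaning and waits for its ReplicaSet and pods to be collected. The mechanism is ownerReference-based cascading deletion: an object whose owner no longer exists becomes garbage, which can in turn orphan its own dependents. A toy model of that cascade (not the real controller, which works from a dependency graph and work queues):

```python
def collect_garbage(objects):
    """Repeatedly delete objects whose owner no longer exists.

    `objects` maps uid -> owner uid (or None for root objects).
    Returns the set of surviving uids.
    """
    live = dict(objects)
    changed = True
    while changed:
        changed = False
        for uid, owner in list(live.items()):
            if owner is not None and owner not in live:
                del live[uid]  # owner is gone => collect the dependent
                changed = True
    return live.keys()

# The Deployment is already deleted, so the ReplicaSet it owned is collected,
# and then the pods owned by that ReplicaSet are collected in turn.
print(set(collect_garbage({"rs": "deploy", "pod-a": "rs", "pod-b": "rs"})))
```

The "expected 0 rs, got 1 rs" lines in the log are the test polling while this cascade is still in flight.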
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":83,"skipped":1277,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:59:48.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 22 23:59:55.288: INFO: Successfully updated pod "annotationupdatea5120b59-e459-4afd-a068-11a73dca95c0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 22 23:59:57.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7788" for this suite. 
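The Downward API test above updates a pod's annotations and checks that the change shows up in the mounted volume file. The projected file format is one `key="value"` line per annotation, sorted by key, and the kubelet rewrites the file when metadata changes. A simplified rendering sketch (the real kubelet escaping covers more cases than shown here):

```python
def render_annotations(annotations):
    """Render pod annotations the way a downward-API volume file does:
    one key="escaped value" line per annotation, sorted by key.
    Simplified sketch of the projection format."""
    lines = []
    for key in sorted(annotations):
        value = annotations[key].replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'{key}="{value}"')
    return "\n".join(lines) + "\n"

# The test flow: the file is re-rendered after the annotation update.
print(render_annotations({"builder": "foo"}), end="")
print(render_annotations({"builder": "bar"}), end="")
```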
• [SLOW TEST:8.660 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1284,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 22 23:59:57.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 22 23:59:57.402: INFO: Waiting up to 5m0s for pod "var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d" in namespace "var-expansion-9023" to be "Succeeded or Failed" Apr 22 23:59:57.406: INFO: Pod "var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.858646ms Apr 22 23:59:59.409: INFO: Pod "var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006678671s Apr 23 00:00:01.413: INFO: Pod "var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011094429s STEP: Saw pod success Apr 23 00:00:01.413: INFO: Pod "var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d" satisfied condition "Succeeded or Failed" Apr 23 00:00:01.416: INFO: Trying to get logs from node latest-worker2 pod var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d container dapi-container: STEP: delete the pod Apr 23 00:00:01.444: INFO: Waiting for pod var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d to disappear Apr 23 00:00:01.448: INFO: Pod var-expansion-6da40b0b-5155-437b-afad-efa204e48b3d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:00:01.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9023" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1292,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:00:01.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 23 00:00:01.521: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:00:17.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1840" for this suite. • [SLOW TEST:15.834 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":86,"skipped":1293,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:00:17.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 23 00:00:17.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6938' Apr 23 00:00:17.660: INFO: stderr: "" Apr 23 00:00:17.660: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 23 00:00:17.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:17.793: INFO: stderr: "" Apr 23 00:00:17.793: INFO: stdout: "update-demo-nautilus-9jcrw update-demo-nautilus-cmfpm " Apr 23 00:00:17.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:17.891: INFO: stderr: "" Apr 23 00:00:17.891: INFO: stdout: "" Apr 23 00:00:17.891: INFO: update-demo-nautilus-9jcrw is created but not running Apr 23 00:00:22.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:22.985: INFO: stderr: "" Apr 23 00:00:22.985: INFO: stdout: "update-demo-nautilus-9jcrw update-demo-nautilus-cmfpm " Apr 23 00:00:22.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:23.084: INFO: stderr: "" Apr 23 00:00:23.084: INFO: stdout: "true" Apr 23 00:00:23.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:23.185: INFO: stderr: "" Apr 23 00:00:23.185: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 00:00:23.185: INFO: validating pod update-demo-nautilus-9jcrw Apr 23 00:00:23.190: INFO: got data: { "image": "nautilus.jpg" } Apr 23 00:00:23.190: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 23 00:00:23.190: INFO: update-demo-nautilus-9jcrw is verified up and running Apr 23 00:00:23.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cmfpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:23.280: INFO: stderr: "" Apr 23 00:00:23.280: INFO: stdout: "true" Apr 23 00:00:23.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cmfpm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:23.379: INFO: stderr: "" Apr 23 00:00:23.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 00:00:23.380: INFO: validating pod update-demo-nautilus-cmfpm Apr 23 00:00:23.384: INFO: got data: { "image": "nautilus.jpg" } Apr 23 00:00:23.384: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 00:00:23.384: INFO: update-demo-nautilus-cmfpm is verified up and running STEP: scaling down the replication controller Apr 23 00:00:23.387: INFO: scanned /root for discovery docs: Apr 23 00:00:23.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6938' Apr 23 00:00:24.524: INFO: stderr: "" Apr 23 00:00:24.524: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 23 00:00:24.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:24.620: INFO: stderr: "" Apr 23 00:00:24.620: INFO: stdout: "update-demo-nautilus-9jcrw update-demo-nautilus-cmfpm " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 23 00:00:29.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:29.718: INFO: stderr: "" Apr 23 00:00:29.718: INFO: stdout: "update-demo-nautilus-9jcrw update-demo-nautilus-cmfpm " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 23 00:00:34.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:34.811: INFO: stderr: "" Apr 23 00:00:34.811: INFO: stdout: "update-demo-nautilus-9jcrw " Apr 23 00:00:34.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:34.897: INFO: stderr: "" Apr 23 00:00:34.897: INFO: stdout: "true" Apr 23 00:00:34.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:34.996: INFO: stderr: "" Apr 23 00:00:34.996: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 00:00:34.996: INFO: validating pod update-demo-nautilus-9jcrw Apr 23 00:00:34.999: INFO: got data: { "image": "nautilus.jpg" } Apr 23 00:00:35.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 00:00:35.000: INFO: update-demo-nautilus-9jcrw is verified up and running STEP: scaling up the replication controller Apr 23 00:00:35.002: INFO: scanned /root for discovery docs: Apr 23 00:00:35.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6938' Apr 23 00:00:36.130: INFO: stderr: "" Apr 23 00:00:36.130: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 23 00:00:36.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:36.228: INFO: stderr: "" Apr 23 00:00:36.228: INFO: stdout: "update-demo-nautilus-6p4m6 update-demo-nautilus-9jcrw " Apr 23 00:00:36.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p4m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:36.323: INFO: stderr: "" Apr 23 00:00:36.323: INFO: stdout: "" Apr 23 00:00:36.323: INFO: update-demo-nautilus-6p4m6 is created but not running Apr 23 00:00:41.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6938' Apr 23 00:00:41.416: INFO: stderr: "" Apr 23 00:00:41.416: INFO: stdout: "update-demo-nautilus-6p4m6 update-demo-nautilus-9jcrw " Apr 23 00:00:41.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p4m6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:41.522: INFO: stderr: "" Apr 23 00:00:41.522: INFO: stdout: "true" Apr 23 00:00:41.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p4m6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:41.614: INFO: stderr: "" Apr 23 00:00:41.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 00:00:41.614: INFO: validating pod update-demo-nautilus-6p4m6 Apr 23 00:00:41.617: INFO: got data: { "image": "nautilus.jpg" } Apr 23 00:00:41.618: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 23 00:00:41.618: INFO: update-demo-nautilus-6p4m6 is verified up and running Apr 23 00:00:41.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:41.708: INFO: stderr: "" Apr 23 00:00:41.708: INFO: stdout: "true" Apr 23 00:00:41.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jcrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6938' Apr 23 00:00:41.797: INFO: stderr: "" Apr 23 00:00:41.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 00:00:41.797: INFO: validating pod update-demo-nautilus-9jcrw Apr 23 00:00:41.800: INFO: got data: { "image": "nautilus.jpg" } Apr 23 00:00:41.800: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 00:00:41.800: INFO: update-demo-nautilus-9jcrw is verified up and running STEP: using delete to clean up resources Apr 23 00:00:41.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6938' Apr 23 00:00:41.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:00:41.903: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 23 00:00:41.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6938' Apr 23 00:00:41.998: INFO: stderr: "No resources found in kubectl-6938 namespace.\n" Apr 23 00:00:41.998: INFO: stdout: "" Apr 23 00:00:41.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6938 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 00:00:42.103: INFO: stderr: "" Apr 23 00:00:42.103: INFO: stdout: "update-demo-nautilus-6p4m6\nupdate-demo-nautilus-9jcrw\n" Apr 23 00:00:42.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6938' Apr 23 00:00:42.695: INFO: stderr: "No resources found in kubectl-6938 namespace.\n" Apr 23 00:00:42.695: INFO: stdout: "" Apr 23 00:00:42.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6938 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 00:00:42.792: INFO: stderr: "" Apr 23 00:00:42.792: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:00:42.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6938" for this suite. 
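The scale-and-verify pattern in the run above (scale the RC, then poll `kubectl get pods -o template` until the pod count matches) can be sketched as plain shell. This is a minimal sketch of the control flow only: `kubectl` is stubbed here so it runs without a cluster, and the single pod name it returns is taken from this log; against a real cluster you would drop the stub and keep the loop.

```shell
# Stub standing in for the real call the test makes:
#   kubectl get pods -o template \
#     --template='{{range .items}}{{.metadata.name}} {{end}}' -l name=update-demo
# Here it returns the one surviving pod from the scaled-down RC in this log.
kubectl() { echo "update-demo-nautilus-9jcrw "; }

expected=1
# Poll until the number of pods carrying the label equals the target,
# mirroring the test's retry loop (the real test waits 5s between attempts).
for attempt in 1 2 3; do
  pods=$(kubectl get pods -l name=update-demo)
  actual=$(echo "$pods" | wc -w)
  [ "$actual" -eq "$expected" ] && break
  sleep 5
done
echo "replicas: $actual"
```

The real run performs the same loop twice, once after `kubectl scale rc update-demo-nautilus --replicas=1` and again after scaling back to 2, which is why the log shows `expected=1 actual=2` twice before the pod list shrinks.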
• [SLOW TEST:25.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":87,"skipped":1296,"failed":0} [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:00:42.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 23 00:00:47.657: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a67d54b3-1bd4-42d1-b549-fb27e627776e" Apr 23 00:00:47.657: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a67d54b3-1bd4-42d1-b549-fb27e627776e" in namespace "pods-5881" to be "terminated due to deadline exceeded" Apr 23 00:00:47.675: INFO: Pod "pod-update-activedeadlineseconds-a67d54b3-1bd4-42d1-b549-fb27e627776e": 
Phase="Running", Reason="", readiness=true. Elapsed: 17.625325ms Apr 23 00:00:49.684: INFO: Pod "pod-update-activedeadlineseconds-a67d54b3-1bd4-42d1-b549-fb27e627776e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.026962951s Apr 23 00:00:49.684: INFO: Pod "pod-update-activedeadlineseconds-a67d54b3-1bd4-42d1-b549-fb27e627776e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:00:49.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5881" for this suite. • [SLOW TEST:6.891 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1296,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:00:49.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:00:49.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327" in namespace "downward-api-3727" to be "Succeeded or Failed" Apr 23 00:00:49.956: INFO: Pod "downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327": Phase="Pending", Reason="", readiness=false. Elapsed: 54.765617ms Apr 23 00:00:51.960: INFO: Pod "downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058364953s Apr 23 00:00:53.963: INFO: Pod "downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061919671s STEP: Saw pod success Apr 23 00:00:53.963: INFO: Pod "downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327" satisfied condition "Succeeded or Failed" Apr 23 00:00:53.966: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327 container client-container: STEP: delete the pod Apr 23 00:00:53.997: INFO: Waiting for pod downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327 to disappear Apr 23 00:00:54.006: INFO: Pod downwardapi-volume-f6875d22-0e2e-444c-9543-df0bd2156327 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:00:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3727" for this suite. 
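The pod this test creates follows the standard downwardAPI-volume pattern: a container's `limits.cpu` is projected into a file via `resourceFieldRef`, and the test then reads the container log to confirm the value. A minimal manifest in that shape looks like the following; the pod name, image, command, and the `500m` limit are illustrative assumptions, not the framework-generated values from this run (the log only shows the generated name `downwardapi-volume-f6875d22-…` and the container name `client-container`).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the e2e framework generates its own name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative limit; the test asserts this value appears in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

With this pattern the pod runs to completion ("Succeeded or Failed" in the log), the file contents are checked from the container's logs, and the pod is deleted, matching the STEP sequence above.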
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:00:54.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9897.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9897.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9897.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9897.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:01:00.110: INFO: DNS probes using dns-9897/dns-test-1fe61dbd-a58a-44ec-9931-2451d787c91a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9897" for this suite. 
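The wheezy/jessie command strings above boil down to one probe: resolve a name through the pod's hosts file (via NSS, which is what `getent hosts` consults) and record `OK` on success. A self-contained sketch of that probe, run here against `localhost` instead of the in-pod name `dns-querier-1` so it works outside the cluster:

```shell
# Minimal sketch of the per-name probe each test pod loops over:
# getent consults /etc/hosts (through NSS) just as the e2e commands do;
# on success the probe writes OK, which the framework later collects
# from /results/<image>_hosts@<name>.
hosts_probe() {
  getent hosts "$1" > /dev/null && echo OK
}
result=$(hosts_probe localhost)
echo "$result"
```

The `dig +notcp`/`dig +tcp` halves of the real command exercise the pod A record (`<ip-with-dashes>.dns-9897.pod.cluster.local`) over UDP and TCP respectively; they follow the same write-OK-to-/results convention and only differ in the resolution path.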
• [SLOW TEST:6.201 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":90,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:00.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:01:00.265: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 23 00:01:03.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 create -f -' Apr 23 00:01:09.025: INFO: stderr: "" Apr 23 00:01:09.025: INFO: stdout: "e2e-test-crd-publish-openapi-1040-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 23 00:01:09.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 delete e2e-test-crd-publish-openapi-1040-crds test-foo' Apr 23 00:01:09.145: INFO: stderr: "" Apr 23 00:01:09.145: INFO: stdout: "e2e-test-crd-publish-openapi-1040-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 23 00:01:09.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 apply -f -' Apr 23 00:01:09.372: INFO: stderr: "" Apr 23 00:01:09.372: INFO: stdout: "e2e-test-crd-publish-openapi-1040-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 23 00:01:09.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 delete e2e-test-crd-publish-openapi-1040-crds test-foo' Apr 23 00:01:09.477: INFO: stderr: "" Apr 23 00:01:09.477: INFO: stdout: "e2e-test-crd-publish-openapi-1040-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 23 00:01:09.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 create -f -' Apr 23 00:01:09.746: INFO: rc: 1 Apr 23 00:01:09.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 apply -f -' Apr 23 00:01:09.958: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 23 00:01:09.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 create -f -' Apr 23 00:01:10.185: INFO: rc: 1 Apr 23 00:01:10.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7536 apply -f -' Apr 23 00:01:10.387: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 23 00:01:10.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1040-crds' Apr 23 00:01:10.622: INFO: stderr: "" Apr 23 00:01:10.622: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1040-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 23 00:01:10.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1040-crds.metadata' Apr 23 00:01:10.862: INFO: stderr: "" Apr 23 00:01:10.862: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1040-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 23 00:01:10.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1040-crds.spec' Apr 23 00:01:11.078: INFO: stderr: "" Apr 23 00:01:11.078: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1040-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 23 00:01:11.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1040-crds.spec.bars' Apr 23 00:01:11.332: INFO: stderr: "" Apr 23 00:01:11.332: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1040-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 23 00:01:11.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1040-crds.spec.bars2' Apr 23 00:01:11.576: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:13.491: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7536" for this suite. • [SLOW TEST:13.286 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":91,"skipped":1378,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:13.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 23 00:01:13.627: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253400 0 
2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:01:13.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253401 0 2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:01:13.627: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253402 0 2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 23 00:01:23.661: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253437 0 2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:01:23.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253438 0 
2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:01:23.661: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2391 /api/v1/namespaces/watch-2391/configmaps/e2e-watch-test-label-changed 8314d174-a79e-42e1-9459-0924f1b8171d 10253439 0 2020-04-23 00:01:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:23.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2391" for this suite. • [SLOW TEST:10.169 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":92,"skipped":1385,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating 
a kubernetes client Apr 23 00:01:23.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:01:23.751: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:24.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-257" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":93,"skipped":1390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:24.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:24.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-327" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":94,"skipped":1421,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:24.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 23 00:01:29.036: INFO: Successfully updated pod "pod-update-10ddb1aa-f4e5-4ce8-877e-7ad92fe792c3" STEP: verifying the updated pod is in kubernetes Apr 23 00:01:29.045: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:29.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8956" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:29.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8970 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8970 I0423 00:01:29.204530 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8970, replica count: 2 I0423 00:01:32.255011 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:01:35.255248 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 
2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 23 00:01:35.255: INFO: Creating new exec pod Apr 23 00:01:40.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8970 execpod58zj4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 23 00:01:40.488: INFO: stderr: "I0423 00:01:40.391403 1538 log.go:172] (0xc000a3c0b0) (0xc000504aa0) Create stream\nI0423 00:01:40.391450 1538 log.go:172] (0xc000a3c0b0) (0xc000504aa0) Stream added, broadcasting: 1\nI0423 00:01:40.393973 1538 log.go:172] (0xc000a3c0b0) Reply frame received for 1\nI0423 00:01:40.394032 1538 log.go:172] (0xc000a3c0b0) (0xc00095a000) Create stream\nI0423 00:01:40.394052 1538 log.go:172] (0xc000a3c0b0) (0xc00095a000) Stream added, broadcasting: 3\nI0423 00:01:40.394911 1538 log.go:172] (0xc000a3c0b0) Reply frame received for 3\nI0423 00:01:40.394951 1538 log.go:172] (0xc000a3c0b0) (0xc0009a8000) Create stream\nI0423 00:01:40.394965 1538 log.go:172] (0xc000a3c0b0) (0xc0009a8000) Stream added, broadcasting: 5\nI0423 00:01:40.395811 1538 log.go:172] (0xc000a3c0b0) Reply frame received for 5\nI0423 00:01:40.479821 1538 log.go:172] (0xc000a3c0b0) Data frame received for 5\nI0423 00:01:40.479873 1538 log.go:172] (0xc0009a8000) (5) Data frame handling\nI0423 00:01:40.479922 1538 log.go:172] (0xc0009a8000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0423 00:01:40.480254 1538 log.go:172] (0xc000a3c0b0) Data frame received for 5\nI0423 00:01:40.480281 1538 log.go:172] (0xc0009a8000) (5) Data frame handling\nI0423 00:01:40.480320 1538 log.go:172] (0xc0009a8000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0423 00:01:40.480392 1538 log.go:172] (0xc000a3c0b0) Data frame received for 5\nI0423 00:01:40.480425 1538 log.go:172] (0xc0009a8000) (5) Data frame handling\nI0423 00:01:40.480719 1538 log.go:172] (0xc000a3c0b0) Data 
frame received for 3\nI0423 00:01:40.480744 1538 log.go:172] (0xc00095a000) (3) Data frame handling\nI0423 00:01:40.482830 1538 log.go:172] (0xc000a3c0b0) Data frame received for 1\nI0423 00:01:40.482857 1538 log.go:172] (0xc000504aa0) (1) Data frame handling\nI0423 00:01:40.482882 1538 log.go:172] (0xc000504aa0) (1) Data frame sent\nI0423 00:01:40.482906 1538 log.go:172] (0xc000a3c0b0) (0xc000504aa0) Stream removed, broadcasting: 1\nI0423 00:01:40.482924 1538 log.go:172] (0xc000a3c0b0) Go away received\nI0423 00:01:40.483442 1538 log.go:172] (0xc000a3c0b0) (0xc000504aa0) Stream removed, broadcasting: 1\nI0423 00:01:40.483465 1538 log.go:172] (0xc000a3c0b0) (0xc00095a000) Stream removed, broadcasting: 3\nI0423 00:01:40.483476 1538 log.go:172] (0xc000a3c0b0) (0xc0009a8000) Stream removed, broadcasting: 5\n" Apr 23 00:01:40.488: INFO: stdout: "" Apr 23 00:01:40.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8970 execpod58zj4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.153.121 80' Apr 23 00:01:40.700: INFO: stderr: "I0423 00:01:40.626488 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3680) Create stream\nI0423 00:01:40.626543 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3680) Stream added, broadcasting: 1\nI0423 00:01:40.629071 1558 log.go:172] (0xc000bdcdc0) Reply frame received for 1\nI0423 00:01:40.629266 1558 log.go:172] (0xc000bdcdc0) (0xc0007b8000) Create stream\nI0423 00:01:40.629290 1558 log.go:172] (0xc000bdcdc0) (0xc0007b8000) Stream added, broadcasting: 3\nI0423 00:01:40.630791 1558 log.go:172] (0xc000bdcdc0) Reply frame received for 3\nI0423 00:01:40.630858 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3720) Create stream\nI0423 00:01:40.630907 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3720) Stream added, broadcasting: 5\nI0423 00:01:40.632108 1558 log.go:172] (0xc000bdcdc0) Reply frame received for 5\nI0423 00:01:40.692950 1558 log.go:172] (0xc000bdcdc0) Data frame received for 3\nI0423 
00:01:40.692998 1558 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0423 00:01:40.693024 1558 log.go:172] (0xc000bdcdc0) Data frame received for 5\nI0423 00:01:40.693036 1558 log.go:172] (0xc0000c3720) (5) Data frame handling\nI0423 00:01:40.693057 1558 log.go:172] (0xc0000c3720) (5) Data frame sent\nI0423 00:01:40.693071 1558 log.go:172] (0xc000bdcdc0) Data frame received for 5\nI0423 00:01:40.693081 1558 log.go:172] (0xc0000c3720) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.153.121 80\nConnection to 10.96.153.121 80 port [tcp/http] succeeded!\nI0423 00:01:40.695132 1558 log.go:172] (0xc000bdcdc0) Data frame received for 1\nI0423 00:01:40.695175 1558 log.go:172] (0xc0000c3680) (1) Data frame handling\nI0423 00:01:40.695209 1558 log.go:172] (0xc0000c3680) (1) Data frame sent\nI0423 00:01:40.695234 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3680) Stream removed, broadcasting: 1\nI0423 00:01:40.695260 1558 log.go:172] (0xc000bdcdc0) Go away received\nI0423 00:01:40.695712 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3680) Stream removed, broadcasting: 1\nI0423 00:01:40.695748 1558 log.go:172] (0xc000bdcdc0) (0xc0007b8000) Stream removed, broadcasting: 3\nI0423 00:01:40.695770 1558 log.go:172] (0xc000bdcdc0) (0xc0000c3720) Stream removed, broadcasting: 5\n" Apr 23 00:01:40.701: INFO: stdout: "" Apr 23 00:01:40.701: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:01:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8970" for this suite. 
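The ExternalName-to-ClusterIP transition exercised in the test above can be sketched declaratively. Below is a minimal, hedged reconstruction of the two Service states involved; the names, namespace, and port follow the log, while the `externalName` target and selector labels are assumptions, since the e2e framework builds these objects programmatically rather than from YAML:

```yaml
# State 1: the Service starts as type ExternalName (no selector, no cluster IP).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-8970
spec:
  type: ExternalName
  externalName: example.com   # assumption: the framework chooses its own target
---
# State 2: the same Service is updated to type ClusterIP, gaining a selector
# and port so it fronts the replication controller's two pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-8970
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # assumption: label used by the test's RC pods
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```

After the update, the log's connectivity check (`nc -zv -t -w 2` run via `kubectl exec` from the exec pod) succeeds against both the service DNS name and the allocated cluster IP (10.96.153.121 in this run).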
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.683 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":96,"skipped":1493,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:01:40.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9485 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9485 STEP: Creating statefulset with conflicting port in 
namespace statefulset-9485 STEP: Waiting until pod test-pod will start running in namespace statefulset-9485 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9485 Apr 23 00:01:44.849: INFO: Observed stateful pod in namespace: statefulset-9485, name: ss-0, uid: 151f0e0b-3fb8-4239-b703-3e2df3f118a4, status phase: Pending. Waiting for statefulset controller to delete. Apr 23 00:01:45.030: INFO: Observed stateful pod in namespace: statefulset-9485, name: ss-0, uid: 151f0e0b-3fb8-4239-b703-3e2df3f118a4, status phase: Failed. Waiting for statefulset controller to delete. Apr 23 00:01:45.039: INFO: Observed stateful pod in namespace: statefulset-9485, name: ss-0, uid: 151f0e0b-3fb8-4239-b703-3e2df3f118a4, status phase: Failed. Waiting for statefulset controller to delete. Apr 23 00:01:45.064: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9485 STEP: Removing pod with conflicting port in namespace statefulset-9485 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9485 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 23 00:01:49.122: INFO: Deleting all statefulset in ns statefulset-9485 Apr 23 00:01:49.125: INFO: Scaling statefulset ss to 0 Apr 23 00:02:09.176: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:02:09.179: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:02:09.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9485" for this suite. 
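The eviction scenario above hinges on a hostPort collision: a bare pod claims a port on the chosen node, so the StatefulSet pod scheduled to that node goes to phase Failed, and the controller deletes and recreates ss-0 until the conflicting pod is removed. A minimal sketch of the two colliding objects follows; the port number, image, labels, and node pinning are assumptions, as the test constructs these in Go rather than from manifests:

```yaml
# A bare pod that claims a hostPort on the target node.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: statefulset-9485
spec:
  nodeName: chosen-node            # assumption: the test pins both to one node
  containers:
  - name: conflict
    image: busybox                 # assumption: actual test image may differ
    ports:
    - containerPort: 21017
      hostPort: 21017              # assumption: any fixed port demonstrates the conflict
---
# A StatefulSet whose single replica requests the same hostPort; ss-0 fails
# until test-pod is deleted, then is recreated and reaches Running.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-9485
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: chosen-node        # assumption: same node as test-pod
      containers:
      - name: webserver
        image: httpd               # assumption: consistent with the apache htdocs paths in later tests
        ports:
        - containerPort: 21017
          hostPort: 21017
```

This matches the observed sequence in the log: ss-0 observed in Pending, then Failed twice, then a delete event, and finally recreation to Running once the conflicting pod is gone.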
• [SLOW TEST:28.459 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":97,"skipped":1515,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:02:09.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9189 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector 
baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9189 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9189 Apr 23 00:02:09.305: INFO: Found 0 stateful pods, waiting for 1 Apr 23 00:02:19.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 23 00:02:19.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:02:19.581: INFO: stderr: "I0423 00:02:19.456833 1578 log.go:172] (0xc0003c9d90) (0xc0009a41e0) Create stream\nI0423 00:02:19.456910 1578 log.go:172] (0xc0003c9d90) (0xc0009a41e0) Stream added, broadcasting: 1\nI0423 00:02:19.459718 1578 log.go:172] (0xc0003c9d90) Reply frame received for 1\nI0423 00:02:19.459793 1578 log.go:172] (0xc0003c9d90) (0xc000486be0) Create stream\nI0423 00:02:19.459825 1578 log.go:172] (0xc0003c9d90) (0xc000486be0) Stream added, broadcasting: 3\nI0423 00:02:19.460908 1578 log.go:172] (0xc0003c9d90) Reply frame received for 3\nI0423 00:02:19.460947 1578 log.go:172] (0xc0003c9d90) (0xc0009a4320) Create stream\nI0423 00:02:19.460962 1578 log.go:172] (0xc0003c9d90) (0xc0009a4320) Stream added, broadcasting: 5\nI0423 00:02:19.462031 1578 log.go:172] (0xc0003c9d90) Reply frame received for 5\nI0423 00:02:19.545395 1578 log.go:172] (0xc0003c9d90) Data frame received for 5\nI0423 00:02:19.545427 1578 log.go:172] (0xc0009a4320) (5) Data frame handling\nI0423 00:02:19.545448 1578 log.go:172] (0xc0009a4320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:02:19.574684 1578 log.go:172] (0xc0003c9d90) Data frame received for 3\nI0423 00:02:19.574717 1578 log.go:172] (0xc000486be0) (3) Data frame handling\nI0423 00:02:19.574739 1578 log.go:172] 
(0xc000486be0) (3) Data frame sent\nI0423 00:02:19.574865 1578 log.go:172] (0xc0003c9d90) Data frame received for 5\nI0423 00:02:19.574889 1578 log.go:172] (0xc0009a4320) (5) Data frame handling\nI0423 00:02:19.574959 1578 log.go:172] (0xc0003c9d90) Data frame received for 3\nI0423 00:02:19.574988 1578 log.go:172] (0xc000486be0) (3) Data frame handling\nI0423 00:02:19.576691 1578 log.go:172] (0xc0003c9d90) Data frame received for 1\nI0423 00:02:19.576712 1578 log.go:172] (0xc0009a41e0) (1) Data frame handling\nI0423 00:02:19.576737 1578 log.go:172] (0xc0009a41e0) (1) Data frame sent\nI0423 00:02:19.576760 1578 log.go:172] (0xc0003c9d90) (0xc0009a41e0) Stream removed, broadcasting: 1\nI0423 00:02:19.576881 1578 log.go:172] (0xc0003c9d90) Go away received\nI0423 00:02:19.577244 1578 log.go:172] (0xc0003c9d90) (0xc0009a41e0) Stream removed, broadcasting: 1\nI0423 00:02:19.577266 1578 log.go:172] (0xc0003c9d90) (0xc000486be0) Stream removed, broadcasting: 3\nI0423 00:02:19.577283 1578 log.go:172] (0xc0003c9d90) (0xc0009a4320) Stream removed, broadcasting: 5\n" Apr 23 00:02:19.581: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:02:19.581: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:02:19.584: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 23 00:02:29.589: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:02:29.589: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:02:29.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999482s Apr 23 00:02:30.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992613807s Apr 23 00:02:31.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98741822s Apr 23 00:02:32.622: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 6.982838646s Apr 23 00:02:33.627: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977571569s Apr 23 00:02:34.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972588164s Apr 23 00:02:35.642: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967086011s Apr 23 00:02:36.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.95667409s Apr 23 00:02:37.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.952485893s Apr 23 00:02:38.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 947.719487ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9189 Apr 23 00:02:39.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:02:39.895: INFO: stderr: "I0423 00:02:39.799129 1598 log.go:172] (0xc000a0fb80) (0xc000980780) Create stream\nI0423 00:02:39.799219 1598 log.go:172] (0xc000a0fb80) (0xc000980780) Stream added, broadcasting: 1\nI0423 00:02:39.804336 1598 log.go:172] (0xc000a0fb80) Reply frame received for 1\nI0423 00:02:39.804406 1598 log.go:172] (0xc000a0fb80) (0xc000625720) Create stream\nI0423 00:02:39.804430 1598 log.go:172] (0xc000a0fb80) (0xc000625720) Stream added, broadcasting: 3\nI0423 00:02:39.805664 1598 log.go:172] (0xc000a0fb80) Reply frame received for 3\nI0423 00:02:39.805708 1598 log.go:172] (0xc000a0fb80) (0xc000aac000) Create stream\nI0423 00:02:39.805720 1598 log.go:172] (0xc000a0fb80) (0xc000aac000) Stream added, broadcasting: 5\nI0423 00:02:39.806654 1598 log.go:172] (0xc000a0fb80) Reply frame received for 5\nI0423 00:02:39.889984 1598 log.go:172] (0xc000a0fb80) Data frame received for 5\nI0423 00:02:39.890040 1598 log.go:172] (0xc000aac000) (5) Data frame handling\nI0423 00:02:39.890066 
1598 log.go:172] (0xc000aac000) (5) Data frame sent\nI0423 00:02:39.890082 1598 log.go:172] (0xc000a0fb80) Data frame received for 5\nI0423 00:02:39.890093 1598 log.go:172] (0xc000aac000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:02:39.890120 1598 log.go:172] (0xc000a0fb80) Data frame received for 3\nI0423 00:02:39.890155 1598 log.go:172] (0xc000625720) (3) Data frame handling\nI0423 00:02:39.890187 1598 log.go:172] (0xc000625720) (3) Data frame sent\nI0423 00:02:39.890202 1598 log.go:172] (0xc000a0fb80) Data frame received for 3\nI0423 00:02:39.890213 1598 log.go:172] (0xc000625720) (3) Data frame handling\nI0423 00:02:39.891780 1598 log.go:172] (0xc000a0fb80) Data frame received for 1\nI0423 00:02:39.891797 1598 log.go:172] (0xc000980780) (1) Data frame handling\nI0423 00:02:39.891817 1598 log.go:172] (0xc000980780) (1) Data frame sent\nI0423 00:02:39.891831 1598 log.go:172] (0xc000a0fb80) (0xc000980780) Stream removed, broadcasting: 1\nI0423 00:02:39.891905 1598 log.go:172] (0xc000a0fb80) Go away received\nI0423 00:02:39.892124 1598 log.go:172] (0xc000a0fb80) (0xc000980780) Stream removed, broadcasting: 1\nI0423 00:02:39.892142 1598 log.go:172] (0xc000a0fb80) (0xc000625720) Stream removed, broadcasting: 3\nI0423 00:02:39.892150 1598 log.go:172] (0xc000a0fb80) (0xc000aac000) Stream removed, broadcasting: 5\n" Apr 23 00:02:39.895: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:02:39.895: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:02:39.899: INFO: Found 1 stateful pods, waiting for 3 Apr 23 00:02:49.904: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:02:49.904: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:02:49.904: INFO: Waiting for pod ss-2 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 23 00:02:49.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:02:50.142: INFO: stderr: "I0423 00:02:50.047330 1618 log.go:172] (0xc000b1b970) (0xc000ac2a00) Create stream\nI0423 00:02:50.047396 1618 log.go:172] (0xc000b1b970) (0xc000ac2a00) Stream added, broadcasting: 1\nI0423 00:02:50.050071 1618 log.go:172] (0xc000b1b970) Reply frame received for 1\nI0423 00:02:50.050125 1618 log.go:172] (0xc000b1b970) (0xc0009cc5a0) Create stream\nI0423 00:02:50.050624 1618 log.go:172] (0xc000b1b970) (0xc0009cc5a0) Stream added, broadcasting: 3\nI0423 00:02:50.052105 1618 log.go:172] (0xc000b1b970) Reply frame received for 3\nI0423 00:02:50.052381 1618 log.go:172] (0xc000b1b970) (0xc000a320a0) Create stream\nI0423 00:02:50.052418 1618 log.go:172] (0xc000b1b970) (0xc000a320a0) Stream added, broadcasting: 5\nI0423 00:02:50.054403 1618 log.go:172] (0xc000b1b970) Reply frame received for 5\nI0423 00:02:50.134370 1618 log.go:172] (0xc000b1b970) Data frame received for 3\nI0423 00:02:50.134412 1618 log.go:172] (0xc0009cc5a0) (3) Data frame handling\nI0423 00:02:50.134425 1618 log.go:172] (0xc0009cc5a0) (3) Data frame sent\nI0423 00:02:50.134434 1618 log.go:172] (0xc000b1b970) Data frame received for 3\nI0423 00:02:50.134442 1618 log.go:172] (0xc0009cc5a0) (3) Data frame handling\nI0423 00:02:50.134478 1618 log.go:172] (0xc000b1b970) Data frame received for 5\nI0423 00:02:50.134516 1618 log.go:172] (0xc000a320a0) (5) Data frame handling\nI0423 00:02:50.134551 1618 log.go:172] (0xc000a320a0) (5) Data frame sent\nI0423 00:02:50.134572 1618 log.go:172] (0xc000b1b970) Data frame received for 5\nI0423 00:02:50.134588 1618 log.go:172] (0xc000a320a0) 
(5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:02:50.136145 1618 log.go:172] (0xc000b1b970) Data frame received for 1\nI0423 00:02:50.136177 1618 log.go:172] (0xc000ac2a00) (1) Data frame handling\nI0423 00:02:50.136199 1618 log.go:172] (0xc000ac2a00) (1) Data frame sent\nI0423 00:02:50.136235 1618 log.go:172] (0xc000b1b970) (0xc000ac2a00) Stream removed, broadcasting: 1\nI0423 00:02:50.136263 1618 log.go:172] (0xc000b1b970) Go away received\nI0423 00:02:50.136694 1618 log.go:172] (0xc000b1b970) (0xc000ac2a00) Stream removed, broadcasting: 1\nI0423 00:02:50.136717 1618 log.go:172] (0xc000b1b970) (0xc0009cc5a0) Stream removed, broadcasting: 3\nI0423 00:02:50.136729 1618 log.go:172] (0xc000b1b970) (0xc000a320a0) Stream removed, broadcasting: 5\n" Apr 23 00:02:50.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:02:50.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:02:50.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:02:50.383: INFO: stderr: "I0423 00:02:50.284097 1640 log.go:172] (0xc000bd0160) (0xc00056ebe0) Create stream\nI0423 00:02:50.284164 1640 log.go:172] (0xc000bd0160) (0xc00056ebe0) Stream added, broadcasting: 1\nI0423 00:02:50.286750 1640 log.go:172] (0xc000bd0160) Reply frame received for 1\nI0423 00:02:50.286792 1640 log.go:172] (0xc000bd0160) (0xc00097e000) Create stream\nI0423 00:02:50.286811 1640 log.go:172] (0xc000bd0160) (0xc00097e000) Stream added, broadcasting: 3\nI0423 00:02:50.287910 1640 log.go:172] (0xc000bd0160) Reply frame received for 3\nI0423 00:02:50.287964 1640 log.go:172] (0xc000bd0160) (0xc00097e0a0) Create stream\nI0423 00:02:50.287978 1640 log.go:172] 
(0xc000bd0160) (0xc00097e0a0) Stream added, broadcasting: 5\nI0423 00:02:50.289079 1640 log.go:172] (0xc000bd0160) Reply frame received for 5\nI0423 00:02:50.349885 1640 log.go:172] (0xc000bd0160) Data frame received for 5\nI0423 00:02:50.349914 1640 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0423 00:02:50.349936 1640 log.go:172] (0xc00097e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:02:50.376729 1640 log.go:172] (0xc000bd0160) Data frame received for 3\nI0423 00:02:50.376776 1640 log.go:172] (0xc00097e000) (3) Data frame handling\nI0423 00:02:50.376809 1640 log.go:172] (0xc00097e000) (3) Data frame sent\nI0423 00:02:50.376830 1640 log.go:172] (0xc000bd0160) Data frame received for 3\nI0423 00:02:50.376851 1640 log.go:172] (0xc00097e000) (3) Data frame handling\nI0423 00:02:50.376994 1640 log.go:172] (0xc000bd0160) Data frame received for 5\nI0423 00:02:50.377027 1640 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0423 00:02:50.378763 1640 log.go:172] (0xc000bd0160) Data frame received for 1\nI0423 00:02:50.378780 1640 log.go:172] (0xc00056ebe0) (1) Data frame handling\nI0423 00:02:50.378797 1640 log.go:172] (0xc00056ebe0) (1) Data frame sent\nI0423 00:02:50.378813 1640 log.go:172] (0xc000bd0160) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0423 00:02:50.379176 1640 log.go:172] (0xc000bd0160) Go away received\nI0423 00:02:50.379269 1640 log.go:172] (0xc000bd0160) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0423 00:02:50.379295 1640 log.go:172] (0xc000bd0160) (0xc00097e000) Stream removed, broadcasting: 3\nI0423 00:02:50.379308 1640 log.go:172] (0xc000bd0160) (0xc00097e0a0) Stream removed, broadcasting: 5\n" Apr 23 00:02:50.383: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:02:50.383: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:02:50.383: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:02:50.624: INFO: stderr: "I0423 00:02:50.508682 1663 log.go:172] (0xc00003ab00) (0xc000b24000) Create stream\nI0423 00:02:50.508744 1663 log.go:172] (0xc00003ab00) (0xc000b24000) Stream added, broadcasting: 1\nI0423 00:02:50.513988 1663 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0423 00:02:50.514129 1663 log.go:172] (0xc00003ab00) (0xc0009a4000) Create stream\nI0423 00:02:50.514155 1663 log.go:172] (0xc00003ab00) (0xc0009a4000) Stream added, broadcasting: 3\nI0423 00:02:50.522081 1663 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0423 00:02:50.522112 1663 log.go:172] (0xc00003ab00) (0xc00069f400) Create stream\nI0423 00:02:50.522119 1663 log.go:172] (0xc00003ab00) (0xc00069f400) Stream added, broadcasting: 5\nI0423 00:02:50.523745 1663 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0423 00:02:50.584972 1663 log.go:172] (0xc00003ab00) Data frame received for 5\nI0423 00:02:50.585006 1663 log.go:172] (0xc00069f400) (5) Data frame handling\nI0423 00:02:50.585027 1663 log.go:172] (0xc00069f400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:02:50.616680 1663 log.go:172] (0xc00003ab00) Data frame received for 3\nI0423 00:02:50.616706 1663 log.go:172] (0xc0009a4000) (3) Data frame handling\nI0423 00:02:50.616728 1663 log.go:172] (0xc0009a4000) (3) Data frame sent\nI0423 00:02:50.616739 1663 log.go:172] (0xc00003ab00) Data frame received for 3\nI0423 00:02:50.616747 1663 log.go:172] (0xc0009a4000) (3) Data frame handling\nI0423 00:02:50.616873 1663 log.go:172] (0xc00003ab00) Data frame received for 5\nI0423 00:02:50.616892 1663 log.go:172] (0xc00069f400) (5) Data frame handling\nI0423 00:02:50.618553 1663 log.go:172] (0xc00003ab00) Data frame received for 1\nI0423 00:02:50.618595 1663 
log.go:172] (0xc000b24000) (1) Data frame handling\nI0423 00:02:50.618627 1663 log.go:172] (0xc000b24000) (1) Data frame sent\nI0423 00:02:50.618667 1663 log.go:172] (0xc00003ab00) (0xc000b24000) Stream removed, broadcasting: 1\nI0423 00:02:50.618695 1663 log.go:172] (0xc00003ab00) Go away received\nI0423 00:02:50.619133 1663 log.go:172] (0xc00003ab00) (0xc000b24000) Stream removed, broadcasting: 1\nI0423 00:02:50.619163 1663 log.go:172] (0xc00003ab00) (0xc0009a4000) Stream removed, broadcasting: 3\nI0423 00:02:50.619185 1663 log.go:172] (0xc00003ab00) (0xc00069f400) Stream removed, broadcasting: 5\n" Apr 23 00:02:50.624: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:02:50.624: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:02:50.624: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:02:50.627: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 23 00:03:00.635: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:03:00.635: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:03:00.635: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:03:00.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999493s Apr 23 00:03:01.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992380608s Apr 23 00:03:02.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98696153s Apr 23 00:03:03.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982041709s Apr 23 00:03:04.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977550224s Apr 23 00:03:05.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972274829s Apr 23 00:03:06.706: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 3.940409573s Apr 23 00:03:07.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.935665557s Apr 23 00:03:08.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.930842434s Apr 23 00:03:09.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.850139ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9189 Apr 23 00:03:10.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:03:10.975: INFO: stderr: "I0423 00:03:10.877410 1684 log.go:172] (0xc000b1f340) (0xc000950500) Create stream\nI0423 00:03:10.877461 1684 log.go:172] (0xc000b1f340) (0xc000950500) Stream added, broadcasting: 1\nI0423 00:03:10.881738 1684 log.go:172] (0xc000b1f340) Reply frame received for 1\nI0423 00:03:10.881771 1684 log.go:172] (0xc000b1f340) (0xc0005e57c0) Create stream\nI0423 00:03:10.881777 1684 log.go:172] (0xc000b1f340) (0xc0005e57c0) Stream added, broadcasting: 3\nI0423 00:03:10.882603 1684 log.go:172] (0xc000b1f340) Reply frame received for 3\nI0423 00:03:10.882653 1684 log.go:172] (0xc000b1f340) (0xc00049ebe0) Create stream\nI0423 00:03:10.882669 1684 log.go:172] (0xc000b1f340) (0xc00049ebe0) Stream added, broadcasting: 5\nI0423 00:03:10.883624 1684 log.go:172] (0xc000b1f340) Reply frame received for 5\nI0423 00:03:10.969551 1684 log.go:172] (0xc000b1f340) Data frame received for 3\nI0423 00:03:10.969608 1684 log.go:172] (0xc0005e57c0) (3) Data frame handling\nI0423 00:03:10.969627 1684 log.go:172] (0xc0005e57c0) (3) Data frame sent\nI0423 00:03:10.969639 1684 log.go:172] (0xc000b1f340) Data frame received for 3\nI0423 00:03:10.969650 1684 log.go:172] (0xc0005e57c0) (3) Data frame handling\nI0423 00:03:10.969727 1684 log.go:172]
(0xc000b1f340) Data frame received for 5\nI0423 00:03:10.969766 1684 log.go:172] (0xc00049ebe0) (5) Data frame handling\nI0423 00:03:10.969791 1684 log.go:172] (0xc00049ebe0) (5) Data frame sent\nI0423 00:03:10.969803 1684 log.go:172] (0xc000b1f340) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:03:10.969813 1684 log.go:172] (0xc00049ebe0) (5) Data frame handling\nI0423 00:03:10.970605 1684 log.go:172] (0xc000b1f340) Data frame received for 1\nI0423 00:03:10.970637 1684 log.go:172] (0xc000950500) (1) Data frame handling\nI0423 00:03:10.970666 1684 log.go:172] (0xc000950500) (1) Data frame sent\nI0423 00:03:10.970686 1684 log.go:172] (0xc000b1f340) (0xc000950500) Stream removed, broadcasting: 1\nI0423 00:03:10.970706 1684 log.go:172] (0xc000b1f340) Go away received\nI0423 00:03:10.971229 1684 log.go:172] (0xc000b1f340) (0xc000950500) Stream removed, broadcasting: 1\nI0423 00:03:10.971254 1684 log.go:172] (0xc000b1f340) (0xc0005e57c0) Stream removed, broadcasting: 3\nI0423 00:03:10.971267 1684 log.go:172] (0xc000b1f340) (0xc00049ebe0) Stream removed, broadcasting: 5\n" Apr 23 00:03:10.975: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:03:10.975: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:03:10.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:03:11.218: INFO: stderr: "I0423 00:03:11.154756 1703 log.go:172] (0xc000a0e840) (0xc0007c1180) Create stream\nI0423 00:03:11.154811 1703 log.go:172] (0xc000a0e840) (0xc0007c1180) Stream added, broadcasting: 1\nI0423 00:03:11.157862 1703 log.go:172] (0xc000a0e840) Reply frame received for 1\nI0423 00:03:11.157895 1703 log.go:172] (0xc000a0e840) (0xc000a62000) 
Create stream\nI0423 00:03:11.157904 1703 log.go:172] (0xc000a0e840) (0xc000a62000) Stream added, broadcasting: 3\nI0423 00:03:11.159032 1703 log.go:172] (0xc000a0e840) Reply frame received for 3\nI0423 00:03:11.159095 1703 log.go:172] (0xc000a0e840) (0xc0007c1360) Create stream\nI0423 00:03:11.159129 1703 log.go:172] (0xc000a0e840) (0xc0007c1360) Stream added, broadcasting: 5\nI0423 00:03:11.160219 1703 log.go:172] (0xc000a0e840) Reply frame received for 5\nI0423 00:03:11.212269 1703 log.go:172] (0xc000a0e840) Data frame received for 5\nI0423 00:03:11.212320 1703 log.go:172] (0xc0007c1360) (5) Data frame handling\nI0423 00:03:11.212339 1703 log.go:172] (0xc0007c1360) (5) Data frame sent\nI0423 00:03:11.212353 1703 log.go:172] (0xc000a0e840) Data frame received for 5\nI0423 00:03:11.212365 1703 log.go:172] (0xc0007c1360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:03:11.212393 1703 log.go:172] (0xc000a0e840) Data frame received for 3\nI0423 00:03:11.212408 1703 log.go:172] (0xc000a62000) (3) Data frame handling\nI0423 00:03:11.212421 1703 log.go:172] (0xc000a62000) (3) Data frame sent\nI0423 00:03:11.212433 1703 log.go:172] (0xc000a0e840) Data frame received for 3\nI0423 00:03:11.212445 1703 log.go:172] (0xc000a62000) (3) Data frame handling\nI0423 00:03:11.214145 1703 log.go:172] (0xc000a0e840) Data frame received for 1\nI0423 00:03:11.214167 1703 log.go:172] (0xc0007c1180) (1) Data frame handling\nI0423 00:03:11.214180 1703 log.go:172] (0xc0007c1180) (1) Data frame sent\nI0423 00:03:11.214195 1703 log.go:172] (0xc000a0e840) (0xc0007c1180) Stream removed, broadcasting: 1\nI0423 00:03:11.214218 1703 log.go:172] (0xc000a0e840) Go away received\nI0423 00:03:11.214617 1703 log.go:172] (0xc000a0e840) (0xc0007c1180) Stream removed, broadcasting: 1\nI0423 00:03:11.214636 1703 log.go:172] (0xc000a0e840) (0xc000a62000) Stream removed, broadcasting: 3\nI0423 00:03:11.214644 1703 log.go:172] (0xc000a0e840) (0xc0007c1360) Stream 
removed, broadcasting: 5\n" Apr 23 00:03:11.219: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:03:11.219: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:03:11.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:03:11.408: INFO: stderr: "I0423 00:03:11.328003 1724 log.go:172] (0xc00003b810) (0xc0005c75e0) Create stream\nI0423 00:03:11.328050 1724 log.go:172] (0xc00003b810) (0xc0005c75e0) Stream added, broadcasting: 1\nI0423 00:03:11.330008 1724 log.go:172] (0xc00003b810) Reply frame received for 1\nI0423 00:03:11.330040 1724 log.go:172] (0xc00003b810) (0xc000994000) Create stream\nI0423 00:03:11.330049 1724 log.go:172] (0xc00003b810) (0xc000994000) Stream added, broadcasting: 3\nI0423 00:03:11.330790 1724 log.go:172] (0xc00003b810) Reply frame received for 3\nI0423 00:03:11.330825 1724 log.go:172] (0xc00003b810) (0xc0005c77c0) Create stream\nI0423 00:03:11.330838 1724 log.go:172] (0xc00003b810) (0xc0005c77c0) Stream added, broadcasting: 5\nI0423 00:03:11.331541 1724 log.go:172] (0xc00003b810) Reply frame received for 5\nI0423 00:03:11.401524 1724 log.go:172] (0xc00003b810) Data frame received for 3\nI0423 00:03:11.401589 1724 log.go:172] (0xc000994000) (3) Data frame handling\nI0423 00:03:11.401635 1724 log.go:172] (0xc000994000) (3) Data frame sent\nI0423 00:03:11.401669 1724 log.go:172] (0xc00003b810) Data frame received for 3\nI0423 00:03:11.401698 1724 log.go:172] (0xc000994000) (3) Data frame handling\nI0423 00:03:11.401726 1724 log.go:172] (0xc00003b810) Data frame received for 5\nI0423 00:03:11.401740 1724 log.go:172] (0xc0005c77c0) (5) Data frame handling\nI0423 00:03:11.401753 1724 log.go:172] (0xc0005c77c0) (5) Data frame 
sent\nI0423 00:03:11.401768 1724 log.go:172] (0xc00003b810) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:03:11.401797 1724 log.go:172] (0xc0005c77c0) (5) Data frame handling\nI0423 00:03:11.403426 1724 log.go:172] (0xc00003b810) Data frame received for 1\nI0423 00:03:11.403457 1724 log.go:172] (0xc0005c75e0) (1) Data frame handling\nI0423 00:03:11.403495 1724 log.go:172] (0xc0005c75e0) (1) Data frame sent\nI0423 00:03:11.403520 1724 log.go:172] (0xc00003b810) (0xc0005c75e0) Stream removed, broadcasting: 1\nI0423 00:03:11.403539 1724 log.go:172] (0xc00003b810) Go away received\nI0423 00:03:11.403967 1724 log.go:172] (0xc00003b810) (0xc0005c75e0) Stream removed, broadcasting: 1\nI0423 00:03:11.403991 1724 log.go:172] (0xc00003b810) (0xc000994000) Stream removed, broadcasting: 3\nI0423 00:03:11.404001 1724 log.go:172] (0xc00003b810) (0xc0005c77c0) Stream removed, broadcasting: 5\n" Apr 23 00:03:11.409: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:03:11.409: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:03:11.409: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 23 00:03:41.424: INFO: Deleting all statefulset in ns statefulset-9189 Apr 23 00:03:41.428: INFO: Scaling statefulset ss to 0 Apr 23 00:03:41.437: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:03:41.439: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:03:41.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"statefulset-9189" for this suite.
• [SLOW TEST:92.264 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":98,"skipped":1529,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:03:41.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 23 00:03:41.503: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 23 00:03:41.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:41.839: INFO: stderr: ""
Apr 23 00:03:41.839: INFO: stdout: "service/agnhost-slave created\n"
Apr 23 00:03:41.839: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 23 00:03:41.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:42.117: INFO: stderr: ""
Apr 23 00:03:42.117: INFO: stdout: "service/agnhost-master created\n"
Apr 23 00:03:42.118: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 23 00:03:42.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:42.365: INFO: stderr: ""
Apr 23 00:03:42.365: INFO: stdout: "service/frontend created\n"
Apr 23 00:03:42.365: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 23 00:03:42.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:42.623: INFO: stderr: ""
Apr 23 00:03:42.623: INFO: stdout: "deployment.apps/frontend created\n"
Apr 23 00:03:42.623: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 23 00:03:42.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:43.293: INFO: stderr: ""
Apr 23 00:03:43.293: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 23 00:03:43.293: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 23 00:03:43.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3597'
Apr 23 00:03:43.545: INFO: stderr: ""
Apr 23 00:03:43.545: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 23 00:03:43.545: INFO: Waiting for all frontend pods to be Running.
Apr 23 00:03:53.595: INFO: Waiting for frontend to serve content.
Apr 23 00:03:53.607: INFO: Trying to add a new entry to the guestbook.
Apr 23 00:03:53.616: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 23 00:03:53.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597'
Apr 23 00:03:53.778: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 23 00:03:53.778: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 23 00:03:53.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597'
Apr 23 00:03:53.964: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:03:53.964: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 23 00:03:53.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597' Apr 23 00:03:54.111: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:03:54.111: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 23 00:03:54.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597' Apr 23 00:03:54.225: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:03:54.225: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 23 00:03:54.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597' Apr 23 00:03:54.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:03:54.356: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 23 00:03:54.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3597' Apr 23 00:03:54.784: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 00:03:54.784: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:03:54.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3597" for this suite. • [SLOW TEST:13.453 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":99,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:03:54.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 23 00:04:03.379: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:03.387: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 00:04:05.387: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:05.408: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 00:04:07.387: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:07.391: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 00:04:09.387: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:09.391: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 00:04:11.387: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:11.391: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 00:04:13.387: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 00:04:13.391: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:04:13.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-lifecycle-hook-297" for this suite. • [SLOW TEST:18.505 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:04:13.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:04:13.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6" in namespace "projected-1902" to be "Succeeded or Failed" Apr 23 00:04:13.516: 
INFO: Pod "downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.104272ms
Apr 23 00:04:15.519: INFO: Pod "downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037827s
Apr 23 00:04:17.526: INFO: Pod "downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04426964s
STEP: Saw pod success
Apr 23 00:04:17.526: INFO: Pod "downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6" satisfied condition "Succeeded or Failed"
Apr 23 00:04:17.529: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6 container client-container:
STEP: delete the pod
Apr 23 00:04:17.575: INFO: Waiting for pod downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6 to disappear
Apr 23 00:04:17.599: INFO: Pod downwardapi-volume-44073e67-90f1-422c-947e-4e2529e7d5f6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:04:17.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1902" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1616,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:04:17.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:04:33.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3572" for this suite.
• [SLOW TEST:16.434 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":102,"skipped":1619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:04:34.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-fv7t2 in namespace proxy-7520
I0423 00:04:34.137041 7 runners.go:190] Created replication controller with name: proxy-service-fv7t2, namespace: proxy-7520, replica count: 1
I0423 00:04:35.187643 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0423 00:04:36.187904 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0423 00:04:37.188191 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0423 00:04:38.188434 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:39.188635 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:40.188816 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:41.189062 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:42.189309 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:43.189503 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:44.189779 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:45.190056 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 00:04:46.190310 7 runners.go:190] proxy-service-fv7t2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 23 00:04:46.197: INFO: setup took 12.095865032s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 23 00:04:46.208: INFO: (0) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 10.42323ms)
Apr 23 00:04:46.208: INFO: (0) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 10.474399ms)
Apr 23 00:04:46.208: INFO: (0) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 10.371376ms)
Apr 23 00:04:46.208: INFO: (0) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 10.439605ms)
Apr 23 00:04:46.209: INFO: (0) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 11.13558ms)
Apr 23 00:04:46.209: INFO: (0) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 11.606136ms)
Apr 23 00:04:46.210: INFO: (0) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 12.556689ms)
Apr 23 00:04:46.210: INFO: (0) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 12.708913ms)
Apr 23 00:04:46.210: INFO: (0) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 12.843963ms)
Apr 23 00:04:46.211: INFO: (0) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 13.087089ms)
Apr 23 00:04:46.211: INFO: (0) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 13.238117ms)
Apr 23 00:04:46.214: INFO: (0) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 5.120481ms)
Apr 23 00:04:46.222: INFO: (1) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.197162ms)
Apr 23 00:04:46.222: INFO: (1) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 5.163456ms)
Apr 23 00:04:46.222: INFO: (1) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 5.203484ms)
Apr 23 00:04:46.222: INFO: (1) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 5.413076ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 5.380505ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 5.490287ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.473919ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 5.393695ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 5.41365ms)
Apr 23 00:04:46.223: INFO: (1) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 5.545256ms)
Apr 23 00:04:46.226: INFO: (2) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.141571ms)
Apr 23 00:04:46.226: INFO: (2) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.271382ms)
Apr 23 00:04:46.226: INFO: (2) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.445614ms)
Apr 23 00:04:46.226: INFO: (2) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 3.509584ms)
Apr 23 00:04:46.227: INFO: (2) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 4.288061ms)
Apr 23 00:04:46.227: INFO: (2) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 4.248698ms)
Apr 23 00:04:46.227: INFO: (2) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 4.265075ms)
Apr 23 00:04:46.227: INFO: (2) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.272982ms)
Apr 23 00:04:46.247: INFO: (2) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 24.157909ms)
Apr 23 00:04:46.247: INFO: (2) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 24.492097ms)
Apr 23 00:04:46.247: INFO: (2) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 24.767581ms)
Apr 23 00:04:46.247: INFO: (2) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 24.673216ms)
Apr 23 00:04:46.247: INFO: (2) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 24.694675ms)
Apr 23 00:04:46.248: INFO: (2) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 24.917546ms)
Apr 23 00:04:46.248: INFO: (2) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 25.538249ms)
Apr 23 00:04:46.252: INFO: (3) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 3.833254ms)
Apr 23 00:04:46.254: INFO: (3) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 6.03812ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 9.363972ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 9.800387ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 9.82862ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 10.004243ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 10.116832ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 10.032485ms)
Apr 23 00:04:46.258: INFO: (3) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 10.112644ms)
Apr 23 00:04:46.259: INFO: (3) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test (200; 5.223426ms)
Apr 23 00:04:46.266: INFO: (4) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 5.211519ms)
Apr 23 00:04:46.267: INFO: (4) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 5.563337ms)
Apr 23 00:04:46.267: INFO: (4) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 5.543259ms)
Apr 23 00:04:46.267: INFO: (4) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 5.532626ms)
Apr 23 00:04:46.267: INFO: (4) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 5.569183ms)
Apr 23 00:04:46.270: INFO: (5) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.486325ms)
Apr 23 00:04:46.270: INFO: (5) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 3.499891ms)
Apr 23 00:04:46.270: INFO: (5) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.584369ms)
Apr 23 00:04:46.270: INFO: (5) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 3.524521ms)
Apr 23 00:04:46.271: INFO: (5) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 3.72481ms)
Apr 23 00:04:46.271: INFO: (5) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.7267ms)
Apr 23 00:04:46.271: INFO: (5) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.775867ms)
Apr 23 00:04:46.271: INFO: (5) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 3.740743ms)
Apr 23 00:04:46.271: INFO: (5) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 3.756845ms)
Apr 23 00:04:46.272: INFO: (5) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 5.282782ms)
Apr 23 00:04:46.273: INFO: (5) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 5.568885ms)
Apr 23 00:04:46.273: INFO: (5) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.577718ms)
Apr 23 00:04:46.273: INFO: (5) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 5.650415ms)
Apr 23 00:04:46.273: INFO: (5) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.801739ms)
Apr 23 00:04:46.273: INFO: (5) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 6.013819ms)
Apr 23 00:04:46.277: INFO: (6) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.883874ms)
Apr 23 00:04:46.277: INFO: (6) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test (200; 8.12432ms)
Apr 23 00:04:46.281: INFO: (6) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 8.198605ms)
Apr 23 00:04:46.281: INFO: (6) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 8.171562ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 9.693053ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 9.775393ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 9.799813ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 9.902737ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 9.917688ms)
Apr 23 00:04:46.283: INFO: (6) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 10.069554ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 3.618283ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.538984ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 3.943808ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 3.958512ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.048324ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.938882ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.010047ms)
Apr 23 00:04:46.287: INFO: (7) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 4.033172ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 4.3751ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 4.521482ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 4.645071ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.605617ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 4.700634ms)
Apr 23 00:04:46.288: INFO: (7) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 4.778437ms)
Apr 23 00:04:46.291: INFO: (8) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 3.678711ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.830594ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 3.834789ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 4.149405ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.127267ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.154992ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 4.481343ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 4.430088ms)
Apr 23 00:04:46.292: INFO: (8) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 4.444887ms)
Apr 23 00:04:46.293: INFO: (8) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 4.416441ms)
Apr 23 00:04:46.293: INFO: (8) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 4.45202ms)
Apr 23 00:04:46.293: INFO: (8) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.531726ms)
Apr 23 00:04:46.293: INFO: (8) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.858894ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.77708ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 5.013938ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 5.092133ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 5.03105ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 5.347612ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 5.382314ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 5.355002ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 5.364336ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 5.423604ms)
Apr 23 00:04:46.298: INFO: (9) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 5.395471ms)
Apr 23 00:04:46.299: INFO: (9) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.451981ms)
Apr 23 00:04:46.299: INFO: (9) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 5.725562ms)
Apr 23 00:04:46.299: INFO: (9) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 5.734265ms)
Apr 23 00:04:46.299: INFO: (9) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.883945ms)
Apr 23 00:04:46.302: INFO: (10) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.429867ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 3.574659ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.514817ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 3.604073ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 3.641549ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 3.793471ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 4.317696ms)
Apr 23 00:04:46.303: INFO: (10) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.346587ms)
Apr 23 00:04:46.304: INFO: (10) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 4.503879ms)
Apr 23 00:04:46.304: INFO: (10) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.506956ms)
Apr 23 00:04:46.304: INFO: (10) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.843903ms)
Apr 23 00:04:46.304: INFO: (10) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 4.865977ms)
Apr 23 00:04:46.304: INFO: (10) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 4.845084ms)
Apr 23 00:04:46.305: INFO: (10) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 6.012993ms)
Apr 23 00:04:46.305: INFO: (10) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 4.574289ms)
Apr 23 00:04:46.310: INFO: (11) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 4.671308ms)
Apr 23 00:04:46.310: INFO: (11) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 4.58056ms)
Apr 23 00:04:46.310: INFO: (11) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.571084ms)
Apr 23 00:04:46.310: INFO: (11) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.569012ms)
Apr 23 00:04:46.310: INFO: (11) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test (200; 6.300941ms)
Apr 23 00:04:46.312: INFO: (11) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 6.33283ms)
Apr 23 00:04:46.312: INFO: (11) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 6.428903ms)
Apr 23 00:04:46.312: INFO: (11) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 6.394106ms)
Apr 23 00:04:46.312: INFO: (11) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 6.422705ms)
Apr 23 00:04:46.315: INFO: (12) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 3.671848ms)
Apr 23 00:04:46.315: INFO: (12) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.693863ms)
Apr 23 00:04:46.315: INFO: (12) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 3.70443ms)
Apr 23 00:04:46.316: INFO: (12) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.0246ms)
Apr 23 00:04:46.316: INFO: (12) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 4.259928ms)
Apr 23 00:04:46.316: INFO: (12) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 4.310593ms)
Apr 23 00:04:46.316: INFO: (12) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.368296ms)
Apr 23 00:04:46.316: INFO: (12) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test (200; 3.624885ms)
Apr 23 00:04:46.321: INFO: (13) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 3.636558ms)
Apr 23 00:04:46.321: INFO: (13) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 3.762612ms)
Apr 23 00:04:46.321: INFO: (13) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.906041ms)
Apr 23 00:04:46.322: INFO: (13) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 3.992967ms)
Apr 23 00:04:46.322: INFO: (13) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.064373ms)
Apr 23 00:04:46.322: INFO: (13) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.159825ms)
Apr 23 00:04:46.322: INFO: (13) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 4.192032ms)
Apr 23 00:04:46.322: INFO: (13) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.588843ms)
Apr 23 00:04:46.323: INFO: (13) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 5.395333ms)
Apr 23 00:04:46.323: INFO: (13) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 5.489456ms)
Apr 23 00:04:46.323: INFO: (13) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.507693ms)
Apr 23 00:04:46.323: INFO: (13) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.588035ms)
Apr 23 00:04:46.323: INFO: (13) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 5.54528ms)
Apr 23 00:04:46.325: INFO: (14) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test (200; 5.887919ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 5.976435ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 6.095173ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 6.14378ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 6.134387ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 6.180416ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 6.223357ms)
Apr 23 00:04:46.329: INFO: (14) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 6.177719ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 6.547283ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 6.483085ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 6.569413ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 6.544236ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 6.551182ms)
Apr 23 00:04:46.330: INFO: (14) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 6.543833ms)
Apr 23 00:04:46.332: INFO: (15) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 2.18533ms)
Apr 23 00:04:46.334: INFO: (15) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.196325ms)
Apr 23 00:04:46.334: INFO: (15) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 4.489195ms)
Apr 23 00:04:46.334: INFO: (15) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 4.498749ms)
Apr 23 00:04:46.335: INFO: (15) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 4.594074ms)
Apr 23 00:04:46.335: INFO: (15) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 4.318278ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 4.385217ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.39624ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 4.430379ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 4.357793ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 4.361049ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 4.397976ms)
Apr 23 00:04:46.347: INFO: (16) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.394394ms)
Apr 23 00:04:46.355: INFO: (16) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 12.693703ms)
Apr 23 00:04:46.355: INFO: (16) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 12.853481ms)
Apr 23 00:04:46.355: INFO: (16) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 12.958938ms)
Apr 23 00:04:46.356: INFO: (16) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 13.251648ms)
Apr 23 00:04:46.356: INFO: (16) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 13.27878ms)
Apr 23 00:04:46.356: INFO: (16) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 4.718898ms)
Apr 23 00:04:46.360: INFO: (17) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 4.679827ms)
Apr 23 00:04:46.361: INFO: (17) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 4.81379ms)
Apr 23 00:04:46.361: INFO: (17) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.919457ms)
Apr 23 00:04:46.361: INFO: (17) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 5.016946ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 5.678792ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.748823ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname2/proxy/: bar (200; 5.81601ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.828777ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 5.849175ms)
Apr 23 00:04:46.362: INFO: (17) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 5.951712ms)
Apr 23 00:04:46.364: INFO: (18) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:460/proxy/: tls baz (200; 2.146394ms)
Apr 23 00:04:46.364: INFO: (18) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:1080/proxy/: test<... (200; 2.4057ms)
Apr 23 00:04:46.364: INFO: (18) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:462/proxy/: tls qux (200; 2.478856ms)
Apr 23 00:04:46.364: INFO: (18) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 2.417285ms)
Apr 23 00:04:46.365: INFO: (18) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 2.789656ms)
Apr 23 00:04:46.365: INFO: (18) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 2.902463ms)
Apr 23 00:04:46.365: INFO: (18) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 3.400543ms)
Apr 23 00:04:46.365: INFO: (18) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname2/proxy/: bar (200; 3.58633ms)
Apr 23 00:04:46.365: INFO: (18) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:1080/proxy/: ... (200; 3.555424ms)
Apr 23 00:04:46.366: INFO: (18) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7:162/proxy/: bar (200; 3.852269ms)
Apr 23 00:04:46.366: INFO: (18) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: ... (200; 2.626373ms)
Apr 23 00:04:46.370: INFO: (19) /api/v1/namespaces/proxy-7520/pods/https:proxy-service-fv7t2-tjvb7:443/proxy/: test<... (200; 4.325403ms)
Apr 23 00:04:46.371: INFO: (19) /api/v1/namespaces/proxy-7520/pods/http:proxy-service-fv7t2-tjvb7:160/proxy/: foo (200; 4.374729ms)
Apr 23 00:04:46.371: INFO: (19) /api/v1/namespaces/proxy-7520/services/proxy-service-fv7t2:portname1/proxy/: foo (200; 4.360995ms)
Apr 23 00:04:46.372: INFO: (19) /api/v1/namespaces/proxy-7520/pods/proxy-service-fv7t2-tjvb7/proxy/: test (200; 4.563974ms)
Apr 23 00:04:46.372: INFO: (19) /api/v1/namespaces/proxy-7520/services/http:proxy-service-fv7t2:portname1/proxy/: foo (200; 4.695106ms)
Apr 23 00:04:46.372: INFO: (19) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname1/proxy/: tls baz (200; 5.095305ms)
Apr 23 00:04:46.372: INFO: (19) /api/v1/namespaces/proxy-7520/services/https:proxy-service-fv7t2:tlsportname2/proxy/: tls qux (200; 5.097454ms)
STEP: deleting ReplicationController proxy-service-fv7t2 in namespace proxy-7520, will wait for the garbage collector to delete the pods
Apr 23 00:04:46.431: INFO: Deleting ReplicationController proxy-service-fv7t2 took: 6.593093ms
Apr 23 00:04:46.731: INFO: Terminating ReplicationController proxy-service-fv7t2 pods took: 300.220806ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:04:49.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7520" for this suite.
• [SLOW TEST:15.294 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":103,"skipped":1745,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:04:49.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 23 00:04:49.387: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 23 00:04:50.098: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 23 00:04:52.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197090, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197090, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197090, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197090, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:04:54.850: INFO: Waited 619.891874ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:04:55.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8288" for this suite. 
• [SLOW TEST:6.063 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":104,"skipped":1749,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:04:55.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:04:55.516: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 23 00:05:00.534: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 23 00:05:00.534: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 23 00:05:02.538: INFO: Creating deployment "test-rollover-deployment" Apr 23 00:05:02.546: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 23 00:05:04.553: INFO: Check revision of new replica set for deployment 
"test-rollover-deployment" Apr 23 00:05:04.559: INFO: Ensure that both replica sets have 1 created replica Apr 23 00:05:04.564: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 23 00:05:04.570: INFO: Updating deployment test-rollover-deployment Apr 23 00:05:04.570: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 23 00:05:06.613: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 23 00:05:06.622: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 23 00:05:06.628: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:06.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197104, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:08.636: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:08.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197108, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:10.636: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:10.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197108, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:12.636: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:12.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197108, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:14.636: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:14.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197108, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:16.635: INFO: all replica sets need to contain the pod-template-hash label Apr 23 00:05:16.635: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197108, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:05:18.636: INFO: Apr 23 00:05:18.636: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 23 00:05:18.645: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9933 /apis/apps/v1/namespaces/deployment-9933/deployments/test-rollover-deployment 87f409ea-0174-4dfd-823a-a47d18df917b 10255115 2 2020-04-23 00:05:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023a1e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-23 00:05:02 +0000 UTC,LastTransitionTime:2020-04-23 00:05:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-23 00:05:18 +0000 UTC,LastTransitionTime:2020-04-23 00:05:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 23 00:05:18.649: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-9933 /apis/apps/v1/namespaces/deployment-9933/replicasets/test-rollover-deployment-78df7bc796 b965547d-1c7d-43d1-9cde-73b99f7c29fe 10255103 2 2020-04-23 00:05:04 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 
87f409ea-0174-4dfd-823a-a47d18df917b 0xc0047f04d7 0xc0047f04d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047f0548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:05:18.649: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 23 00:05:18.649: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9933 /apis/apps/v1/namespaces/deployment-9933/replicasets/test-rollover-controller 80971be4-c620-4479-b187-34cebf647f41 10255114 2 2020-04-23 00:04:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 87f409ea-0174-4dfd-823a-a47d18df917b 0xc0047f0407 0xc0047f0408}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 
00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0047f0468 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:05:18.649: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9933 /apis/apps/v1/namespaces/deployment-9933/replicasets/test-rollover-deployment-f6c94f66c 7c159686-9ccd-4a7d-bafc-ba558e849f9e 10255053 2 2020-04-23 00:05:02 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 87f409ea-0174-4dfd-823a-a47d18df917b 0xc0047f05b0 0xc0047f05b1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047f0628 ClusterFirst map[] false false 
false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:05:18.652: INFO: Pod "test-rollover-deployment-78df7bc796-7kf99" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-7kf99 test-rollover-deployment-78df7bc796- deployment-9933 /api/v1/namespaces/deployment-9933/pods/test-rollover-deployment-78df7bc796-7kf99 54d1eb8e-b635-4b23-9987-b757a0d75694 10255071 0 2020-04-23 00:05:04 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 b965547d-1c7d-43d1-9cde-73b99f7c29fe 0xc0047f0bd7 0xc0047f0bd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pjs4w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pjs4w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pjs4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},Liveness
Probe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-23 00:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:05:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.241,StartTime:2020-04-23 00:05:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:05:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://bbf60f8be66f4ba43eed7beac1e90deda2ebc38429fd030651d7e09a24549c16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:05:18.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9933" for this suite. 
• [SLOW TEST:23.257 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":105,"skipped":1760,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:05:18.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0423 00:05:19.887465 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 23 00:05:19.887: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:05:19.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6911" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":106,"skipped":1771,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:05:19.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-2f27b6ed-27e7-4842-bc04-b2af2255ef09 STEP: Creating a pod to test consume secrets Apr 23 00:05:20.152: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2" in namespace "projected-2465" to be "Succeeded or Failed" Apr 23 00:05:20.170: INFO: Pod "pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.045243ms Apr 23 00:05:22.174: INFO: Pod "pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02202819s Apr 23 00:05:24.218: INFO: Pod "pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065362903s STEP: Saw pod success Apr 23 00:05:24.218: INFO: Pod "pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2" satisfied condition "Succeeded or Failed" Apr 23 00:05:24.221: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2 container projected-secret-volume-test: STEP: delete the pod Apr 23 00:05:24.308: INFO: Waiting for pod pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2 to disappear Apr 23 00:05:24.343: INFO: Pod pod-projected-secrets-977a3ac4-9355-4d2a-ad8d-8ae153e787c2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:05:24.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2465" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1783,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:05:24.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:05:24.633: INFO: Creating ReplicaSet my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af Apr 23 00:05:24.670: 
INFO: Pod name my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af: Found 0 pods out of 1 Apr 23 00:05:29.677: INFO: Pod name my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af: Found 1 pods out of 1 Apr 23 00:05:29.677: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af" is running Apr 23 00:05:29.680: INFO: Pod "my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af-49w78" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:05:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:05:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:05:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:05:24 +0000 UTC Reason: Message:}]) Apr 23 00:05:29.680: INFO: Trying to dial the pod Apr 23 00:05:34.691: INFO: Controller my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af: Got expected result from replica 1 [my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af-49w78]: "my-hostname-basic-fa851f93-0d0c-48de-b29e-a8e30286f5af-49w78", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:05:34.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6777" for this suite. 
• [SLOW TEST:10.341 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":108,"skipped":1792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:05:34.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 23 00:05:34.826: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 23 00:05:39.835: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:05:39.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3800" for this suite. 
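The ReplicationController test above relies on label-based selection: when a pod's matching label is changed, the controller "releases" the pod (stops counting it) and creates a replacement to restore the replica count. A minimal manifest sketch of such a controller; the name, label, and image are assumptions for illustration, not taken from the test:

```yaml
# Hypothetical RC whose pods are selected purely by label.
# Overwriting the "name" label on a running pod (e.g. with
# `kubectl label --overwrite`) releases it from this controller.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: web
        image: httpd
```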
• [SLOW TEST:5.197 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":109,"skipped":1872,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:05:39.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9735 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 23 00:05:39.975: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 23 00:05:40.082: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:05:42.194: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:05:44.086: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:05:46.662: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 
00:05:48.086: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:05:50.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:05:52.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:05:54.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:05:56.087: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 23 00:05:56.092: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 23 00:05:58.096: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 23 00:06:02.118: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.158:8080/dial?request=hostname&protocol=http&host=10.244.2.157&port=8080&tries=1'] Namespace:pod-network-test-9735 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:06:02.118: INFO: >>> kubeConfig: /root/.kube/config I0423 00:06:02.157301 7 log.go:172] (0xc001f54840) (0xc0025c70e0) Create stream I0423 00:06:02.157388 7 log.go:172] (0xc001f54840) (0xc0025c70e0) Stream added, broadcasting: 1 I0423 00:06:02.159168 7 log.go:172] (0xc001f54840) Reply frame received for 1 I0423 00:06:02.159234 7 log.go:172] (0xc001f54840) (0xc001f43ae0) Create stream I0423 00:06:02.159253 7 log.go:172] (0xc001f54840) (0xc001f43ae0) Stream added, broadcasting: 3 I0423 00:06:02.160191 7 log.go:172] (0xc001f54840) Reply frame received for 3 I0423 00:06:02.160223 7 log.go:172] (0xc001f54840) (0xc00139b540) Create stream I0423 00:06:02.160233 7 log.go:172] (0xc001f54840) (0xc00139b540) Stream added, broadcasting: 5 I0423 00:06:02.160987 7 log.go:172] (0xc001f54840) Reply frame received for 5 I0423 00:06:02.245942 7 log.go:172] (0xc001f54840) Data frame received for 3 I0423 00:06:02.245980 7 log.go:172] (0xc001f43ae0) (3) Data frame handling I0423 00:06:02.246000 7 log.go:172] (0xc001f43ae0) (3) Data frame sent I0423 
00:06:02.246221 7 log.go:172] (0xc001f54840) Data frame received for 5 I0423 00:06:02.246238 7 log.go:172] (0xc00139b540) (5) Data frame handling I0423 00:06:02.246307 7 log.go:172] (0xc001f54840) Data frame received for 3 I0423 00:06:02.246316 7 log.go:172] (0xc001f43ae0) (3) Data frame handling I0423 00:06:02.248099 7 log.go:172] (0xc001f54840) Data frame received for 1 I0423 00:06:02.248116 7 log.go:172] (0xc0025c70e0) (1) Data frame handling I0423 00:06:02.248126 7 log.go:172] (0xc0025c70e0) (1) Data frame sent I0423 00:06:02.248137 7 log.go:172] (0xc001f54840) (0xc0025c70e0) Stream removed, broadcasting: 1 I0423 00:06:02.248157 7 log.go:172] (0xc001f54840) Go away received I0423 00:06:02.248250 7 log.go:172] (0xc001f54840) (0xc0025c70e0) Stream removed, broadcasting: 1 I0423 00:06:02.248267 7 log.go:172] (0xc001f54840) (0xc001f43ae0) Stream removed, broadcasting: 3 I0423 00:06:02.248278 7 log.go:172] (0xc001f54840) (0xc00139b540) Stream removed, broadcasting: 5 Apr 23 00:06:02.248: INFO: Waiting for responses: map[] Apr 23 00:06:02.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.158:8080/dial?request=hostname&protocol=http&host=10.244.1.245&port=8080&tries=1'] Namespace:pod-network-test-9735 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:06:02.251: INFO: >>> kubeConfig: /root/.kube/config I0423 00:06:02.279867 7 log.go:172] (0xc0014fe4d0) (0xc00139bea0) Create stream I0423 00:06:02.279899 7 log.go:172] (0xc0014fe4d0) (0xc00139bea0) Stream added, broadcasting: 1 I0423 00:06:02.281605 7 log.go:172] (0xc0014fe4d0) Reply frame received for 1 I0423 00:06:02.281637 7 log.go:172] (0xc0014fe4d0) (0xc001eca000) Create stream I0423 00:06:02.281654 7 log.go:172] (0xc0014fe4d0) (0xc001eca000) Stream added, broadcasting: 3 I0423 00:06:02.282600 7 log.go:172] (0xc0014fe4d0) Reply frame received for 3 I0423 00:06:02.282627 7 log.go:172] (0xc0014fe4d0) 
(0xc0025c7180) Create stream I0423 00:06:02.282643 7 log.go:172] (0xc0014fe4d0) (0xc0025c7180) Stream added, broadcasting: 5 I0423 00:06:02.283440 7 log.go:172] (0xc0014fe4d0) Reply frame received for 5 I0423 00:06:02.340989 7 log.go:172] (0xc0014fe4d0) Data frame received for 3 I0423 00:06:02.341013 7 log.go:172] (0xc001eca000) (3) Data frame handling I0423 00:06:02.341026 7 log.go:172] (0xc001eca000) (3) Data frame sent I0423 00:06:02.342062 7 log.go:172] (0xc0014fe4d0) Data frame received for 5 I0423 00:06:02.342085 7 log.go:172] (0xc0025c7180) (5) Data frame handling I0423 00:06:02.342105 7 log.go:172] (0xc0014fe4d0) Data frame received for 3 I0423 00:06:02.342116 7 log.go:172] (0xc001eca000) (3) Data frame handling I0423 00:06:02.343104 7 log.go:172] (0xc0014fe4d0) Data frame received for 1 I0423 00:06:02.343132 7 log.go:172] (0xc00139bea0) (1) Data frame handling I0423 00:06:02.343167 7 log.go:172] (0xc00139bea0) (1) Data frame sent I0423 00:06:02.343186 7 log.go:172] (0xc0014fe4d0) (0xc00139bea0) Stream removed, broadcasting: 1 I0423 00:06:02.343204 7 log.go:172] (0xc0014fe4d0) Go away received I0423 00:06:02.343319 7 log.go:172] (0xc0014fe4d0) (0xc00139bea0) Stream removed, broadcasting: 1 I0423 00:06:02.343340 7 log.go:172] (0xc0014fe4d0) (0xc001eca000) Stream removed, broadcasting: 3 I0423 00:06:02.343352 7 log.go:172] (0xc0014fe4d0) (0xc0025c7180) Stream removed, broadcasting: 5 Apr 23 00:06:02.343: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:02.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9735" for this suite. 
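Each connectivity check in the networking test above is one HTTP request from the client pod to a netserver's `/dial` endpoint, which in turn dials the target and reports what it saw. A standalone sketch of how that probe URL is assembled; the IPs are the pod IPs from this run's log, and the variable names are assumptions:

```shell
# Rebuild the /dial probe URL the test client issues (IPs from the log above).
proxy="10.244.2.158:8080"   # test-container-pod, which performs the dial
target="10.244.2.157"       # netserver-0, the pod being reached
url="http://${proxy}/dial?request=hostname&protocol=http&host=${target}&port=8080&tries=1"
echo "$url"
```

On success the endpoint returns the target pod's hostname, which is why the log ends each check with an empty `Waiting for responses: map[]`.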
• [SLOW TEST:22.450 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1877,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:02.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:06:02.444: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:06.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4305" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1894,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:06.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 23 00:06:06.576: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:20.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1297" for this suite. 
• [SLOW TEST:14.359 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":112,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:20.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 23 00:06:20.911: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix113767152/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:20.969: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1673" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":113,"skipped":1938,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:20.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 23 00:06:25.660: INFO: Successfully updated pod "labelsupdatea23d1ba1-4aa6-4b49-8681-d923c1e4b666" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-171" for this suite. 
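The "update labels on modification" test above works because the pod mounts its own metadata through a projected downwardAPI volume; when the labels change, the kubelet rewrites the mounted file in place. A minimal manifest sketch under assumed names (the pod name, image, and mount path are illustrative, not from the test):

```yaml
# Hypothetical pod that watches its own labels via a projected
# downwardAPI volume; editing metadata.labels updates /etc/podinfo/labels.
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```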
• [SLOW TEST:6.740 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:27.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 23 00:06:27.836: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 23 00:06:27.851: INFO: Waiting for terminating namespaces to be deleted... 
Apr 23 00:06:27.854: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 23 00:06:27.858: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.858: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 00:06:27.858: INFO: labelsupdatea23d1ba1-4aa6-4b49-8681-d923c1e4b666 from projected-171 started at 2020-04-23 00:06:21 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.858: INFO: Container client-container ready: true, restart count 0 Apr 23 00:06:27.858: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.858: INFO: Container kindnet-cni ready: true, restart count 0 Apr 23 00:06:27.858: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 23 00:06:27.862: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.862: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 00:06:27.862: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.862: INFO: Container kindnet-cni ready: true, restart count 0 Apr 23 00:06:27.862: INFO: pod-logs-websocket-cf9faa6f-10d7-4430-bc07-1d5dcc052262 from pods-4305 started at 2020-04-23 00:06:02 +0000 UTC (1 container statuses recorded) Apr 23 00:06:27.862: INFO: Container main ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-3293c71a-dfcd-4d19-bbb0-96e2a38d6a2c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3293c71a-dfcd-4d19-bbb0-96e2a38d6a2c off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3293c71a-dfcd-4d19-bbb0-96e2a38d6a2c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:35.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-538" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.274 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":115,"skipped":1998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:35.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] 
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:50.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5037" for this suite. • [SLOW TEST:14.090 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":116,"skipped":2037,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:50.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-2ad1a117-6dd7-4f23-b85d-33995a96c9cb STEP: Creating a pod to test consume secrets Apr 23 00:06:50.193: INFO: Waiting up to 5m0s 
for pod "pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08" in namespace "secrets-2217" to be "Succeeded or Failed" Apr 23 00:06:50.197: INFO: Pod "pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08": Phase="Pending", Reason="", readiness=false. Elapsed: 3.335892ms Apr 23 00:06:52.215: INFO: Pod "pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021360163s Apr 23 00:06:54.219: INFO: Pod "pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025398779s STEP: Saw pod success Apr 23 00:06:54.219: INFO: Pod "pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08" satisfied condition "Succeeded or Failed" Apr 23 00:06:54.222: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08 container secret-volume-test: STEP: delete the pod Apr 23 00:06:54.240: INFO: Waiting for pod pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08 to disappear Apr 23 00:06:54.244: INFO: Pod pod-secrets-6250b61f-e935-44cc-bea5-1fbc1e113d08 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:06:54.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2217" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2044,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:06:54.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:07:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6448" for this suite. • [SLOW TEST:16.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":118,"skipped":2047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:07:10.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7023 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7023;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7023 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7023;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7023.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7023.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7023.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7023.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7023.svc SRV)" && test -n "$$check" && echo 
OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7023.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7023.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7023.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 21.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.21_udp@PTR;check="$$(dig +tcp +noall +answer +search 21.172.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.172.21_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7023 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7023;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7023 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7023;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7023.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7023.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7023.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7023.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7023.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7023.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7023.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7023.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7023.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 21.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.21_udp@PTR;check="$$(dig +tcp +noall +answer +search 21.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.21_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:07:16.543: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.546: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.550: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.554: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.557: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods 
dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.563: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.566: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.590: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.594: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.597: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.604: INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the 
requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.608: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.611: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.613: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:16.630: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:21.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not 
find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.642: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.654: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.657: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.678: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.681: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: 
the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.683: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.685: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.687: INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.695: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:21.710: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:26.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.642: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.649: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.652: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.654: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.658: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.677: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.680: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.683: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.688: INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.691: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.694: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.699: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:26.720: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:31.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 
00:07:31.647: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.650: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.653: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.656: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.659: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.680: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.683: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.687: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods 
dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.692: INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.699: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.702: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:31.721: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc 
jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:36.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.647: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.651: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.654: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.657: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.661: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod 
dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.684: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.687: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.690: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.693: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.696: INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.702: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.704: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:36.727: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:41.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.638: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.642: INFO: Unable to read wheezy_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.648: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.654: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.657: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.674: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.676: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.679: INFO: Unable to read jessie_udp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.682: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023 from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.684: 
INFO: Unable to read jessie_udp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.693: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc from pod dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c: the server could not find the requested resource (get pods dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c) Apr 23 00:07:41.710: INFO: Lookups using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7023 wheezy_tcp@dns-test-service.dns-7023 wheezy_udp@dns-test-service.dns-7023.svc wheezy_tcp@dns-test-service.dns-7023.svc wheezy_udp@_http._tcp.dns-test-service.dns-7023.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7023.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7023 jessie_tcp@dns-test-service.dns-7023 jessie_udp@dns-test-service.dns-7023.svc jessie_tcp@dns-test-service.dns-7023.svc jessie_udp@_http._tcp.dns-test-service.dns-7023.svc jessie_tcp@_http._tcp.dns-test-service.dns-7023.svc] Apr 23 00:07:46.731: INFO: DNS probes using dns-7023/dns-test-0b2b262e-5169-4ece-a002-0533825fcc7c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:07:47.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7023" for this suite. • [SLOW TEST:37.071 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":119,"skipped":2079,"failed":0} [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:07:47.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5723 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5723 STEP: creating replication controller externalsvc in namespace services-5723 I0423 
00:07:47.696100 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5723, replica count: 2 I0423 00:07:50.746644 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:07:53.746875 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 23 00:07:53.799: INFO: Creating new exec pod Apr 23 00:07:57.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5723 execpod8qd6g -- /bin/sh -x -c nslookup nodeport-service' Apr 23 00:07:58.023: INFO: stderr: "I0423 00:07:57.947136 2019 log.go:172] (0xc0000e4790) (0xc0003eaa00) Create stream\nI0423 00:07:57.947199 2019 log.go:172] (0xc0000e4790) (0xc0003eaa00) Stream added, broadcasting: 1\nI0423 00:07:57.948979 2019 log.go:172] (0xc0000e4790) Reply frame received for 1\nI0423 00:07:57.949020 2019 log.go:172] (0xc0000e4790) (0xc0007d60a0) Create stream\nI0423 00:07:57.949027 2019 log.go:172] (0xc0000e4790) (0xc0007d60a0) Stream added, broadcasting: 3\nI0423 00:07:57.949916 2019 log.go:172] (0xc0000e4790) Reply frame received for 3\nI0423 00:07:57.949949 2019 log.go:172] (0xc0000e4790) (0xc0006eb180) Create stream\nI0423 00:07:57.949957 2019 log.go:172] (0xc0000e4790) (0xc0006eb180) Stream added, broadcasting: 5\nI0423 00:07:57.950650 2019 log.go:172] (0xc0000e4790) Reply frame received for 5\nI0423 00:07:58.010447 2019 log.go:172] (0xc0000e4790) Data frame received for 5\nI0423 00:07:58.010480 2019 log.go:172] (0xc0006eb180) (5) Data frame handling\nI0423 00:07:58.010506 2019 log.go:172] (0xc0006eb180) (5) Data frame sent\n+ nslookup nodeport-service\nI0423 00:07:58.015045 2019 log.go:172] (0xc0000e4790) Data frame received for 3\nI0423 00:07:58.015074 2019 
log.go:172] (0xc0007d60a0) (3) Data frame handling\nI0423 00:07:58.015105 2019 log.go:172] (0xc0007d60a0) (3) Data frame sent\nI0423 00:07:58.016101 2019 log.go:172] (0xc0000e4790) Data frame received for 3\nI0423 00:07:58.016131 2019 log.go:172] (0xc0007d60a0) (3) Data frame handling\nI0423 00:07:58.016152 2019 log.go:172] (0xc0007d60a0) (3) Data frame sent\nI0423 00:07:58.016421 2019 log.go:172] (0xc0000e4790) Data frame received for 5\nI0423 00:07:58.016456 2019 log.go:172] (0xc0006eb180) (5) Data frame handling\nI0423 00:07:58.016627 2019 log.go:172] (0xc0000e4790) Data frame received for 3\nI0423 00:07:58.016696 2019 log.go:172] (0xc0007d60a0) (3) Data frame handling\nI0423 00:07:58.018443 2019 log.go:172] (0xc0000e4790) Data frame received for 1\nI0423 00:07:58.018471 2019 log.go:172] (0xc0003eaa00) (1) Data frame handling\nI0423 00:07:58.018494 2019 log.go:172] (0xc0003eaa00) (1) Data frame sent\nI0423 00:07:58.018507 2019 log.go:172] (0xc0000e4790) (0xc0003eaa00) Stream removed, broadcasting: 1\nI0423 00:07:58.018523 2019 log.go:172] (0xc0000e4790) Go away received\nI0423 00:07:58.018956 2019 log.go:172] (0xc0000e4790) (0xc0003eaa00) Stream removed, broadcasting: 1\nI0423 00:07:58.018986 2019 log.go:172] (0xc0000e4790) (0xc0007d60a0) Stream removed, broadcasting: 3\nI0423 00:07:58.019001 2019 log.go:172] (0xc0000e4790) (0xc0006eb180) Stream removed, broadcasting: 5\n" Apr 23 00:07:58.024: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5723.svc.cluster.local\tcanonical name = externalsvc.services-5723.svc.cluster.local.\nName:\texternalsvc.services-5723.svc.cluster.local\nAddress: 10.96.144.99\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5723, will wait for the garbage collector to delete the pods Apr 23 00:07:58.090: INFO: Deleting ReplicationController externalsvc took: 13.271566ms Apr 23 00:07:58.390: INFO: Terminating ReplicationController externalsvc pods took: 300.224631ms Apr 
23 00:08:13.050: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:13.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5723" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.616 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":120,"skipped":2079,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:13.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 23 00:08:17.237: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1683" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2094,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:17.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:17.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1814" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":122,"skipped":2098,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:17.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:08:17.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:08:19.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197298, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:08:22.986: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5177" for this suite. STEP: Destroying namespace "webhook-5177-markers" for this suite. 
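The admission-webhook test above deploys a webhook server, then lists and collection-deletes mutating webhook configurations. A minimal sketch of the kind of object being listed, assuming illustrative names (the real test generates its own names, service reference, and CA bundle):

```yaml
# Hypothetical MutatingWebhookConfiguration of the shape the test operates on.
# The metadata name, service reference, path, and caBundle are all illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook       # illustrative name
webhooks:
  - name: mutate-configmaps.example.com
    clientConfig:
      service:
        name: e2e-test-webhook          # the service the test pairs with an endpoint
        namespace: webhook-5177
        path: /mutating-configmaps      # illustrative path
      caBundle: <base64-encoded-CA>     # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

Deleting the collection of such configurations is what makes the subsequently created ConfigMap come through unmutated, which is the assertion the test ends on.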
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.534 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":123,"skipped":2109,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:23.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:24.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4719" for this suite. STEP: Destroying namespace "nspatchtest-9feaaffa-4cfe-44fb-a9b9-526a04814f7c-7030" for this suite. 
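The Namespaces patch test above adds a label to a namespace and then reads it back. A sketch of the patch body, assuming an illustrative label key and value (the real test generates its own):

```yaml
# Hypothetical merge-patch body adding a label to a Namespace.
metadata:
  labels:
    testLabel: testValue
```

Such a patch could be applied with, for example, `kubectl patch namespace <name> --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'`.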
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":124,"skipped":2110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:24.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-05f92627-4a49-4869-84db-02f86cca7627 STEP: Creating a pod to test consume configMaps Apr 23 00:08:24.358: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b" in namespace "projected-1726" to be "Succeeded or Failed" Apr 23 00:08:24.490: INFO: Pod "pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b": Phase="Pending", Reason="", readiness=false. Elapsed: 132.224495ms Apr 23 00:08:26.494: INFO: Pod "pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136595001s Apr 23 00:08:28.499: INFO: Pod "pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.141103042s STEP: Saw pod success Apr 23 00:08:28.499: INFO: Pod "pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b" satisfied condition "Succeeded or Failed" Apr 23 00:08:28.502: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b container projected-configmap-volume-test: STEP: delete the pod Apr 23 00:08:28.545: INFO: Waiting for pod pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b to disappear Apr 23 00:08:28.564: INFO: Pod pod-projected-configmaps-1f111ebc-c6b9-4401-804e-b367961b572b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:28.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1726" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2136,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:28.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:08:28.641: 
INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 23 00:08:31.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 create -f -' Apr 23 00:08:34.264: INFO: stderr: "" Apr 23 00:08:34.265: INFO: stdout: "e2e-test-crd-publish-openapi-8419-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 23 00:08:34.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 delete e2e-test-crd-publish-openapi-8419-crds test-cr' Apr 23 00:08:34.440: INFO: stderr: "" Apr 23 00:08:34.440: INFO: stdout: "e2e-test-crd-publish-openapi-8419-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 23 00:08:34.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 apply -f -' Apr 23 00:08:34.788: INFO: stderr: "" Apr 23 00:08:34.788: INFO: stdout: "e2e-test-crd-publish-openapi-8419-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 23 00:08:34.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 delete e2e-test-crd-publish-openapi-8419-crds test-cr' Apr 23 00:08:34.894: INFO: stderr: "" Apr 23 00:08:34.894: INFO: stdout: "e2e-test-crd-publish-openapi-8419-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 23 00:08:34.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8419-crds' Apr 23 00:08:35.126: INFO: stderr: "" Apr 23 00:08:35.126: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-8419-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6477" for this suite. • [SLOW TEST:8.467 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":126,"skipped":2144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:37.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 
23 00:08:37.118: INFO: Waiting up to 5m0s for pod "pod-db425285-89d5-41a4-9a93-4cf25e441aa3" in namespace "emptydir-9754" to be "Succeeded or Failed" Apr 23 00:08:37.137: INFO: Pod "pod-db425285-89d5-41a4-9a93-4cf25e441aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.403619ms Apr 23 00:08:39.142: INFO: Pod "pod-db425285-89d5-41a4-9a93-4cf25e441aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024314526s Apr 23 00:08:41.146: INFO: Pod "pod-db425285-89d5-41a4-9a93-4cf25e441aa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028386977s STEP: Saw pod success Apr 23 00:08:41.146: INFO: Pod "pod-db425285-89d5-41a4-9a93-4cf25e441aa3" satisfied condition "Succeeded or Failed" Apr 23 00:08:41.149: INFO: Trying to get logs from node latest-worker2 pod pod-db425285-89d5-41a4-9a93-4cf25e441aa3 container test-container: STEP: delete the pod Apr 23 00:08:41.167: INFO: Waiting for pod pod-db425285-89d5-41a4-9a93-4cf25e441aa3 to disappear Apr 23 00:08:41.172: INFO: Pod pod-db425285-89d5-41a4-9a93-4cf25e441aa3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:41.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9754" for this suite. 
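The EmptyDir test above ("root,0666,default") creates a pod that writes a file into an emptyDir volume on the node's default medium and checks its mode; the later tmpfs variants differ only in the volume medium. A minimal pod sketch of that shape, with an illustrative image and command (the real tests use a dedicated mount-test image and generated pod names):

```yaml
# Hypothetical pod mirroring the emptydir test shape; names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -l /test-volume"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}          # set `medium: Memory` here for the tmpfs variants
```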
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:41.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:08:56.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3246" for this suite. STEP: Destroying namespace "nsdeletetest-9001" for this suite. Apr 23 00:08:56.446: INFO: Namespace nsdeletetest-9001 was already deleted STEP: Destroying namespace "nsdeletetest-6799" for this suite. 
• [SLOW TEST:15.270 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":128,"skipped":2243,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:08:56.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 23 00:08:56.564: INFO: Waiting up to 5m0s for pod "pod-157accde-5297-4e59-85ae-e20056b3c350" in namespace "emptydir-6513" to be "Succeeded or Failed" Apr 23 00:08:56.572: INFO: Pod "pod-157accde-5297-4e59-85ae-e20056b3c350": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031608ms Apr 23 00:08:58.576: INFO: Pod "pod-157accde-5297-4e59-85ae-e20056b3c350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012018345s Apr 23 00:09:00.580: INFO: Pod "pod-157accde-5297-4e59-85ae-e20056b3c350": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016482741s STEP: Saw pod success Apr 23 00:09:00.580: INFO: Pod "pod-157accde-5297-4e59-85ae-e20056b3c350" satisfied condition "Succeeded or Failed" Apr 23 00:09:00.584: INFO: Trying to get logs from node latest-worker pod pod-157accde-5297-4e59-85ae-e20056b3c350 container test-container: STEP: delete the pod Apr 23 00:09:00.617: INFO: Waiting for pod pod-157accde-5297-4e59-85ae-e20056b3c350 to disappear Apr 23 00:09:00.621: INFO: Pod pod-157accde-5297-4e59-85ae-e20056b3c350 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:09:00.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6513" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:09:00.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:09:07.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2236" for this suite. • [SLOW TEST:7.093 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":130,"skipped":2301,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:07.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4f4b6e6c-fc14-4239-9c26-a286b1ba8da0
STEP: Creating a pod to test consume secrets
Apr 23 00:09:07.820: INFO: Waiting up to 5m0s for pod "pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca" in namespace "secrets-4498" to be "Succeeded or Failed"
Apr 23 00:09:07.839: INFO: Pod "pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca": Phase="Pending", Reason="", readiness=false. Elapsed: 18.389372ms
Apr 23 00:09:09.843: INFO: Pod "pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022542442s
Apr 23 00:09:11.847: INFO: Pod "pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026836838s
STEP: Saw pod success
Apr 23 00:09:11.847: INFO: Pod "pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca" satisfied condition "Succeeded or Failed"
Apr 23 00:09:11.851: INFO: Trying to get logs from node latest-worker pod pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca container secret-volume-test:
STEP: delete the pod
Apr 23 00:09:11.882: INFO: Waiting for pod pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca to disappear
Apr 23 00:09:11.886: INFO: Pod pod-secrets-f78f7b9a-eced-4282-8c21-bf596f3f32ca no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:11.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4498" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2320,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:11.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c8561786-c6eb-4b68-8fb6-a72406ce81ce
STEP: Creating a pod to test consume secrets
Apr 23 00:09:12.030: INFO: Waiting up to 5m0s for pod "pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb" in namespace "secrets-1495" to be "Succeeded or Failed"
Apr 23 00:09:12.046: INFO: Pod "pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.54065ms
Apr 23 00:09:14.050: INFO: Pod "pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020679518s
Apr 23 00:09:16.055: INFO: Pod "pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025109744s
STEP: Saw pod success
Apr 23 00:09:16.055: INFO: Pod "pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb" satisfied condition "Succeeded or Failed"
Apr 23 00:09:16.058: INFO: Trying to get logs from node latest-worker pod pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb container secret-volume-test:
STEP: delete the pod
Apr 23 00:09:16.076: INFO: Waiting for pod pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb to disappear
Apr 23 00:09:16.081: INFO: Pod pod-secrets-75d6e21c-5014-46cf-ae6b-c1e442db82bb no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:16.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1495" for this suite.
STEP: Destroying namespace "secret-namespace-5414" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2328,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:16.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-d0fe0945-a598-4848-943c-ba3be9248a63
STEP: Creating a pod to test consume secrets
Apr 23 00:09:16.239: INFO: Waiting up to 5m0s for pod "pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2" in namespace "secrets-4429" to be "Succeeded or Failed"
Apr 23 00:09:16.243: INFO: Pod "pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322323ms
Apr 23 00:09:18.247: INFO: Pod "pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007057291s
Apr 23 00:09:20.250: INFO: Pod "pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.010967999s
STEP: Saw pod success
Apr 23 00:09:20.250: INFO: Pod "pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2" satisfied condition "Succeeded or Failed"
Apr 23 00:09:20.254: INFO: Trying to get logs from node latest-worker pod pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2 container secret-volume-test:
STEP: delete the pod
Apr 23 00:09:20.290: INFO: Waiting for pod pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2 to disappear
Apr 23 00:09:20.303: INFO: Pod pod-secrets-1ca71a00-3be4-4152-8e6e-e21a6f5e87d2 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:20.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4429" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2337,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:20.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:09:20.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec" in namespace "projected-982" to be "Succeeded or Failed"
Apr 23 00:09:20.399: INFO: Pod "downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469901ms
Apr 23 00:09:22.403: INFO: Pod "downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007540692s
Apr 23 00:09:24.407: INFO: Pod "downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012272639s
STEP: Saw pod success
Apr 23 00:09:24.407: INFO: Pod "downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec" satisfied condition "Succeeded or Failed"
Apr 23 00:09:24.410: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec container client-container:
STEP: delete the pod
Apr 23 00:09:24.531: INFO: Waiting for pod downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec to disappear
Apr 23 00:09:24.544: INFO: Pod downwardapi-volume-de7ab74e-5fe7-482c-96d8-8214bdee53ec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:24.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-982" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2339,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:24.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Apr 23 00:09:24.654: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6176" to be "Succeeded or Failed"
Apr 23 00:09:24.658: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.998495ms
Apr 23 00:09:26.662: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008431777s
Apr 23 00:09:28.666: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011935172s
STEP: Saw pod success
Apr 23 00:09:28.666: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 23 00:09:28.668: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 23 00:09:28.690: INFO: Waiting for pod pod-host-path-test to disappear
Apr 23 00:09:28.707: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:28.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6176" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2354,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:28.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 23 00:09:28.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:28.826: INFO: Number of nodes with available pods: 0
Apr 23 00:09:28.826: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:29.831: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:29.834: INFO: Number of nodes with available pods: 0
Apr 23 00:09:29.834: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:30.855: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:30.859: INFO: Number of nodes with available pods: 0
Apr 23 00:09:30.859: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:31.857: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:31.873: INFO: Number of nodes with available pods: 1
Apr 23 00:09:31.873: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:32.831: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:32.836: INFO: Number of nodes with available pods: 2
Apr 23 00:09:32.836: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 23 00:09:32.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:32.895: INFO: Number of nodes with available pods: 1
Apr 23 00:09:32.896: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:33.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:33.902: INFO: Number of nodes with available pods: 1
Apr 23 00:09:33.902: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:34.909: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:34.927: INFO: Number of nodes with available pods: 1
Apr 23 00:09:34.927: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:35.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:35.905: INFO: Number of nodes with available pods: 1
Apr 23 00:09:35.905: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:36.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:36.907: INFO: Number of nodes with available pods: 1
Apr 23 00:09:36.907: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:37.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:37.904: INFO: Number of nodes with available pods: 1
Apr 23 00:09:37.904: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:38.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:38.905: INFO: Number of nodes with available pods: 1
Apr 23 00:09:38.905: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:39.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:39.905: INFO: Number of nodes with available pods: 1
Apr 23 00:09:39.905: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:40.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:40.905: INFO: Number of nodes with available pods: 1
Apr 23 00:09:40.905: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:41.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:41.904: INFO: Number of nodes with available pods: 1
Apr 23 00:09:41.904: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:42.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:42.908: INFO: Number of nodes with available pods: 1
Apr 23 00:09:42.908: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:43.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:43.903: INFO: Number of nodes with available pods: 1
Apr 23 00:09:43.904: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:44.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:44.901: INFO: Number of nodes with available pods: 1
Apr 23 00:09:44.901: INFO: Node latest-worker is running more than one daemon pod
Apr 23 00:09:45.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 00:09:45.905: INFO: Number of nodes with available pods: 2
Apr 23 00:09:45.905: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8865, will wait for the garbage collector to delete the pods
Apr 23 00:09:45.966: INFO: Deleting DaemonSet.extensions daemon-set took: 6.34855ms
Apr 23 00:09:46.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.284392ms
Apr 23 00:09:53.070: INFO: Number of nodes with available pods: 0
Apr 23 00:09:53.070: INFO: Number of running nodes: 0, number of available pods: 0
Apr 23 00:09:53.073: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8865/daemonsets","resourceVersion":"10257078"},"items":null}
Apr 23 00:09:53.076: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8865/pods","resourceVersion":"10257078"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:53.085: INFO:
Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8865" for this suite.
• [SLOW TEST:24.377 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":136,"skipped":2379,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:53.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:09:53.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b" in namespace "projected-5458" to be "Succeeded or Failed"
Apr 23 00:09:53.168: INFO: Pod "downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.215031ms
Apr 23 00:09:55.172: INFO: Pod "downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014394612s
Apr 23 00:09:57.176: INFO: Pod "downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01890104s
STEP: Saw pod success
Apr 23 00:09:57.176: INFO: Pod "downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b" satisfied condition "Succeeded or Failed"
Apr 23 00:09:57.180: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b container client-container:
STEP: delete the pod
Apr 23 00:09:57.234: INFO: Waiting for pod downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b to disappear
Apr 23 00:09:57.237: INFO: Pod downwardapi-volume-e847383a-67aa-4790-bb79-d247d23a214b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:09:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5458" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2384,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:09:57.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:09:57.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e" in namespace "downward-api-3071" to be "Succeeded or Failed"
Apr 23 00:09:57.401: INFO: Pod "downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902342ms
Apr 23 00:09:59.423: INFO: Pod "downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025927138s
Apr 23 00:10:01.464: INFO: Pod "downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066992929s
STEP: Saw pod success
Apr 23 00:10:01.465: INFO: Pod "downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e" satisfied condition "Succeeded or Failed"
Apr 23 00:10:01.468: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e container client-container:
STEP: delete the pod
Apr 23 00:10:01.491: INFO: Waiting for pod downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e to disappear
Apr 23 00:10:01.507: INFO: Pod downwardapi-volume-ffbc6ff9-0f75-4774-95be-f50457e8115e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:10:01.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3071" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:10:01.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name
projected-configmap-test-volume-map-113e7a7b-f72c-4e1e-b869-265d4fc0b14b
STEP: Creating a pod to test consume configMaps
Apr 23 00:10:01.614: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0" in namespace "projected-484" to be "Succeeded or Failed"
Apr 23 00:10:01.628: INFO: Pod "pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.916631ms
Apr 23 00:10:03.631: INFO: Pod "pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017339295s
Apr 23 00:10:05.646: INFO: Pod "pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032674211s
STEP: Saw pod success
Apr 23 00:10:05.646: INFO: Pod "pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0" satisfied condition "Succeeded or Failed"
Apr 23 00:10:05.649: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0 container projected-configmap-volume-test:
STEP: delete the pod
Apr 23 00:10:05.676: INFO: Waiting for pod pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0 to disappear
Apr 23 00:10:05.687: INFO: Pod pod-projected-configmaps-26e42ef2-1ebb-44f8-9b9a-5c4af72e46d0 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:10:05.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-484" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2458,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:10:05.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 23 00:10:12.259: INFO: Successfully updated pod "adopt-release-sk66t"
STEP: Checking that the Job readopts the Pod
Apr 23 00:10:12.259: INFO: Waiting up to 15m0s for pod "adopt-release-sk66t" in namespace "job-3622" to be "adopted"
Apr 23 00:10:12.264: INFO: Pod "adopt-release-sk66t": Phase="Running", Reason="", readiness=true. Elapsed: 4.659263ms
Apr 23 00:10:14.267: INFO: Pod "adopt-release-sk66t": Phase="Running", Reason="", readiness=true. Elapsed: 2.00814634s
Apr 23 00:10:14.267: INFO: Pod "adopt-release-sk66t" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 23 00:10:14.777: INFO: Successfully updated pod "adopt-release-sk66t"
STEP: Checking that the Job releases the Pod
Apr 23 00:10:14.777: INFO: Waiting up to 15m0s for pod "adopt-release-sk66t" in namespace "job-3622" to be "released"
Apr 23 00:10:14.794: INFO: Pod "adopt-release-sk66t": Phase="Running", Reason="", readiness=true. Elapsed: 17.455961ms
Apr 23 00:10:16.798: INFO: Pod "adopt-release-sk66t": Phase="Running", Reason="", readiness=true. Elapsed: 2.021236809s
Apr 23 00:10:16.798: INFO: Pod "adopt-release-sk66t" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:10:16.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3622" for this suite.
• [SLOW TEST:11.110 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":140,"skipped":2495,"failed":0}
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:10:16.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:10:16.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9" in namespace "downward-api-8521" to be "Succeeded or Failed"
Apr 23 00:10:16.911: INFO: Pod "downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.079106ms
Apr 23 00:10:18.915: INFO: Pod "downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026064768s
Apr 23 00:10:20.920: INFO: Pod "downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03036321s
STEP: Saw pod success
Apr 23 00:10:20.920: INFO: Pod "downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9" satisfied condition "Succeeded or Failed"
Apr 23 00:10:20.923: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9 container client-container:
STEP: delete the pod
Apr 23 00:10:20.947: INFO: Waiting for pod downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9 to disappear
Apr 23 00:10:20.963: INFO: Pod downwardapi-volume-47ad8233-134a-44e5-94d0-9def4e33f7a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:10:20.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8521" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2495,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:10:20.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:10:32.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2553" for this suite.
• [SLOW TEST:11.176 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":142,"skipped":2506,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:10:32.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-0d1c614c-51bc-4a01-be5c-5f704c9bad0c
STEP: Creating a pod to test consume configMaps
Apr 23 00:10:32.273: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607" in namespace "projected-8231" to be "Succeeded or Failed"
Apr 23 00:10:32.284: INFO: Pod "pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607": Phase="Pending", Reason="", readiness=false. Elapsed: 10.323662ms
Apr 23 00:10:34.372: INFO: Pod "pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098230109s
Apr 23 00:10:36.376: INFO: Pod "pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.102389434s STEP: Saw pod success Apr 23 00:10:36.376: INFO: Pod "pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607" satisfied condition "Succeeded or Failed" Apr 23 00:10:36.379: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607 container projected-configmap-volume-test: STEP: delete the pod Apr 23 00:10:36.397: INFO: Waiting for pod pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607 to disappear Apr 23 00:10:36.402: INFO: Pod pod-projected-configmaps-60b3ee9b-42e0-4fcf-8cda-5e68795ca607 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:10:36.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8231" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2506,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:10:36.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 23 00:10:44.635: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:44.677: INFO: Pod pod-with-poststart-http-hook still exists Apr 23 00:10:46.677: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:46.681: INFO: Pod pod-with-poststart-http-hook still exists Apr 23 00:10:48.677: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:48.681: INFO: Pod pod-with-poststart-http-hook still exists Apr 23 00:10:50.677: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:50.682: INFO: Pod pod-with-poststart-http-hook still exists Apr 23 00:10:52.677: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:52.681: INFO: Pod pod-with-poststart-http-hook still exists Apr 23 00:10:54.677: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 23 00:10:54.684: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:10:54.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2472" for this suite. 
• [SLOW TEST:18.260 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:10:54.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 23 00:10:54.769: INFO: >>> kubeConfig: /root/.kube/config Apr 23 00:10:57.715: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:11:08.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4629" for this suite. • [SLOW TEST:13.597 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":145,"skipped":2574,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:11:08.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 23 00:11:08.351: INFO: Waiting up to 5m0s for pod "var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7" in namespace "var-expansion-4393" to be "Succeeded or Failed" Apr 23 00:11:08.355: INFO: Pod "var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.927545ms Apr 23 00:11:10.358: INFO: Pod "var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007689262s Apr 23 00:11:12.363: INFO: Pod "var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012268003s STEP: Saw pod success Apr 23 00:11:12.363: INFO: Pod "var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7" satisfied condition "Succeeded or Failed" Apr 23 00:11:12.366: INFO: Trying to get logs from node latest-worker2 pod var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7 container dapi-container: STEP: delete the pod Apr 23 00:11:12.385: INFO: Waiting for pod var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7 to disappear Apr 23 00:11:12.390: INFO: Pod var-expansion-8283d3ff-130d-4c44-b49a-9783db1746d7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:11:12.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4393" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2587,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:11:12.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 23 00:11:12.472: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257628 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:11:12.473: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257628 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 23 00:11:22.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257675 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:11:22.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257675 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 23 00:11:32.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257704 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:11:32.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257704 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 23 00:11:42.502: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257734 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:11:42.502: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-a c354f130-afae-4517-9086-673d42db8f03 10257734 0 2020-04-23 00:11:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 23 00:11:52.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-b 720cbb9b-2ff5-4571-b312-b3628f6ac699 10257763 0 2020-04-23 00:11:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:11:52.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-b 720cbb9b-2ff5-4571-b312-b3628f6ac699 10257763 0 2020-04-23 00:11:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 23 00:12:02.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-b 720cbb9b-2ff5-4571-b312-b3628f6ac699 10257792 0 2020-04-23 00:11:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:12:02.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4135 /api/v1/namespaces/watch-4135/configmaps/e2e-watch-test-configmap-b 720cbb9b-2ff5-4571-b312-b3628f6ac699 10257792 0 2020-04-23 00:11:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:12.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4135" for this suite. • [SLOW TEST:60.134 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":147,"skipped":2603,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:12.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 23 00:12:12.615: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 24.755796ms)
Apr 23 00:12:12.619: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.876312ms)
Apr 23 00:12:12.623: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.52465ms)
Apr 23 00:12:12.627: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.467433ms)
Apr 23 00:12:12.630: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.380335ms)
Apr 23 00:12:12.634: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.535315ms)
Apr 23 00:12:12.637: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.676182ms)
Apr 23 00:12:12.641: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.835881ms)
Apr 23 00:12:12.645: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.974504ms)
Apr 23 00:12:12.649: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.840099ms)
Apr 23 00:12:12.653: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.906997ms)
Apr 23 00:12:12.657: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.823992ms)
Apr 23 00:12:12.672: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 15.402319ms)
Apr 23 00:12:12.676: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.721891ms)
Apr 23 00:12:12.680: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.608947ms)
Apr 23 00:12:12.683: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.444517ms)
Apr 23 00:12:12.687: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.42444ms)
Apr 23 00:12:12.690: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.1707ms)
Apr 23 00:12:12.694: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.523165ms)
Apr 23 00:12:12.697: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.32442ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:12.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2773" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":148,"skipped":2605,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:12.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:12:12.770: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:13.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7039" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":149,"skipped":2607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:13.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:12:14.034: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 23 00:12:19.038: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 23 00:12:19.038: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 23 00:12:19.074: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3833 /apis/apps/v1/namespaces/deployment-3833/deployments/test-cleanup-deployment 202c60b8-c41f-4a8a-b2c6-aa9d9e368efa 10257897 1 2020-04-23 00:12:19 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052a9d98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 23 00:12:19.101: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-3833 /apis/apps/v1/namespaces/deployment-3833/replicasets/test-cleanup-deployment-577c77b589 bfd8ad46-8fd9-4aed-a0e1-2a2295090647 10257899 1 2020-04-23 00:12:19 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 202c60b8-c41f-4a8a-b2c6-aa9d9e368efa 0xc00470c0e7 0xc00470c0e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00470c158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:12:19.101: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 23 00:12:19.101: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3833 /apis/apps/v1/namespaces/deployment-3833/replicasets/test-cleanup-controller fe333687-de5a-434a-908e-77e6970487c1 10257898 1 2020-04-23 00:12:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 202c60b8-c41f-4a8a-b2c6-aa9d9e368efa 0xc00470c017 0xc00470c018}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00470c078 
ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:12:19.158: INFO: Pod "test-cleanup-controller-bh5p2" is available: &Pod{ObjectMeta:{test-cleanup-controller-bh5p2 test-cleanup-controller- deployment-3833 /api/v1/namespaces/deployment-3833/pods/test-cleanup-controller-bh5p2 30bacbea-d24f-4342-ad09-5bc4c94db2b5 10257873 0 2020-04-23 00:12:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller fe333687-de5a-434a-908e-77e6970487c1 0xc0036c6257 0xc0036c6258}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5dzv4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5dzv4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5dzv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/te
rmination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:12:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:12:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.184,StartTime:2020-04-23 00:12:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:12:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7ec29f62e0448fef3216870cfb5d288d256289d6df98c3a30f6bdae09ba503e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:12:19.158: INFO: Pod "test-cleanup-deployment-577c77b589-25gck" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-25gck test-cleanup-deployment-577c77b589- deployment-3833 /api/v1/namespaces/deployment-3833/pods/test-cleanup-deployment-577c77b589-25gck ca6497fc-602c-4d8b-b12d-7982ae4ba8be 10257904 0 2020-04-23 00:12:19 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 bfd8ad46-8fd9-4aed-a0e1-2a2295090647 0xc0036c63e7 0xc0036c63e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5dzv4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5dzv4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5dzv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:12:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:19.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3833" for this suite. 
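The deployment section above ends with the "should delete old replica sets" conformance check: once the rollout to the new `test-cleanup-deployment` ReplicaSet completes, the superseded ReplicaSet must be garbage-collected. A minimal sketch of a Deployment that requests this cleanup behavior (illustrative only, not the manifest the e2e framework generates; the name and image are taken from the log, the rest is an assumption):

```yaml
# Illustrative sketch — revisionHistoryLimit: 0 asks the Deployment
# controller to delete old ReplicaSets as soon as they are scaled down,
# which is the behavior the test above asserts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment   # name seen in the log above
spec:
  replicas: 1
  revisionHistoryLimit: 0         # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```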
• [SLOW TEST:5.251 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":150,"skipped":2631,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:19.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6516.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6516.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6516.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6516.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6516.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6516.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:12:25.389: INFO: DNS probes using dns-6516/dns-test-3952a63a-f1d3-433b-bba5-06033bb8b0ef succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:25.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6516" for this suite. 
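The wheezy/jessie probe loops above derive a pod's DNS A record by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.cluster.local` (the `hostname -i | awk -F.` pipeline in the log). A small sketch of that name construction — the helper name is ours, not part of the test framework:

```python
def pod_a_record(pod_ip: str, namespace: str, zone: str = "cluster.local") -> str:
    """Build the pod A-record name that the dig probes query.

    Mirrors the awk pipeline from the log:
    hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'
    """
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{zone}"

# Using the pod IP and namespace that appear in the log above:
print(pod_a_record("10.244.2.184", "dns-6516"))
# -> 10-244-2-184.dns-6516.pod.cluster.local
```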
• [SLOW TEST:6.356 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":151,"skipped":2632,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:25.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-1852 STEP: creating replication controller nodeport-test in namespace services-1852 I0423 00:12:25.955687 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1852, replica count: 2 I0423 00:12:29.006638 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:12:32.006889 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady Apr 23 00:12:32.006: INFO: Creating new exec pod Apr 23 00:12:37.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1852 execpodwrmtq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 23 00:12:37.275: INFO: stderr: "I0423 00:12:37.171321 2154 log.go:172] (0xc000940630) (0xc000990000) Create stream\nI0423 00:12:37.171387 2154 log.go:172] (0xc000940630) (0xc000990000) Stream added, broadcasting: 1\nI0423 00:12:37.174371 2154 log.go:172] (0xc000940630) Reply frame received for 1\nI0423 00:12:37.174426 2154 log.go:172] (0xc000940630) (0xc0009900a0) Create stream\nI0423 00:12:37.174441 2154 log.go:172] (0xc000940630) (0xc0009900a0) Stream added, broadcasting: 3\nI0423 00:12:37.175359 2154 log.go:172] (0xc000940630) Reply frame received for 3\nI0423 00:12:37.175409 2154 log.go:172] (0xc000940630) (0xc000a0e000) Create stream\nI0423 00:12:37.175421 2154 log.go:172] (0xc000940630) (0xc000a0e000) Stream added, broadcasting: 5\nI0423 00:12:37.176431 2154 log.go:172] (0xc000940630) Reply frame received for 5\nI0423 00:12:37.268805 2154 log.go:172] (0xc000940630) Data frame received for 5\nI0423 00:12:37.268852 2154 log.go:172] (0xc000a0e000) (5) Data frame handling\nI0423 00:12:37.268879 2154 log.go:172] (0xc000a0e000) (5) Data frame sent\nI0423 00:12:37.268898 2154 log.go:172] (0xc000940630) Data frame received for 5\nI0423 00:12:37.268913 2154 log.go:172] (0xc000a0e000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0423 00:12:37.268934 2154 log.go:172] (0xc000940630) Data frame received for 3\nI0423 00:12:37.269002 2154 log.go:172] (0xc0009900a0) (3) Data frame handling\nI0423 00:12:37.270639 2154 log.go:172] (0xc000940630) Data frame received for 1\nI0423 00:12:37.270660 2154 log.go:172] (0xc000990000) (1) Data frame handling\nI0423 00:12:37.270674 2154 log.go:172] (0xc000990000) 
(1) Data frame sent\nI0423 00:12:37.270686 2154 log.go:172] (0xc000940630) (0xc000990000) Stream removed, broadcasting: 1\nI0423 00:12:37.270741 2154 log.go:172] (0xc000940630) Go away received\nI0423 00:12:37.270971 2154 log.go:172] (0xc000940630) (0xc000990000) Stream removed, broadcasting: 1\nI0423 00:12:37.270982 2154 log.go:172] (0xc000940630) (0xc0009900a0) Stream removed, broadcasting: 3\nI0423 00:12:37.270987 2154 log.go:172] (0xc000940630) (0xc000a0e000) Stream removed, broadcasting: 5\n" Apr 23 00:12:37.275: INFO: stdout: "" Apr 23 00:12:37.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1852 execpodwrmtq -- /bin/sh -x -c nc -zv -t -w 2 10.96.212.57 80' Apr 23 00:12:37.469: INFO: stderr: "I0423 00:12:37.393896 2177 log.go:172] (0xc00098c000) (0xc000abe000) Create stream\nI0423 00:12:37.393976 2177 log.go:172] (0xc00098c000) (0xc000abe000) Stream added, broadcasting: 1\nI0423 00:12:37.396800 2177 log.go:172] (0xc00098c000) Reply frame received for 1\nI0423 00:12:37.396839 2177 log.go:172] (0xc00098c000) (0xc0006c72c0) Create stream\nI0423 00:12:37.396864 2177 log.go:172] (0xc00098c000) (0xc0006c72c0) Stream added, broadcasting: 3\nI0423 00:12:37.397872 2177 log.go:172] (0xc00098c000) Reply frame received for 3\nI0423 00:12:37.397909 2177 log.go:172] (0xc00098c000) (0xc000abe0a0) Create stream\nI0423 00:12:37.397927 2177 log.go:172] (0xc00098c000) (0xc000abe0a0) Stream added, broadcasting: 5\nI0423 00:12:37.398840 2177 log.go:172] (0xc00098c000) Reply frame received for 5\nI0423 00:12:37.464018 2177 log.go:172] (0xc00098c000) Data frame received for 3\nI0423 00:12:37.464050 2177 log.go:172] (0xc0006c72c0) (3) Data frame handling\nI0423 00:12:37.464080 2177 log.go:172] (0xc00098c000) Data frame received for 5\nI0423 00:12:37.464092 2177 log.go:172] (0xc000abe0a0) (5) Data frame handling\nI0423 00:12:37.464101 2177 log.go:172] (0xc000abe0a0) (5) Data frame sent\nI0423 
00:12:37.464108 2177 log.go:172] (0xc00098c000) Data frame received for 5\nI0423 00:12:37.464115 2177 log.go:172] (0xc000abe0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.212.57 80\nConnection to 10.96.212.57 80 port [tcp/http] succeeded!\nI0423 00:12:37.465208 2177 log.go:172] (0xc00098c000) Data frame received for 1\nI0423 00:12:37.465275 2177 log.go:172] (0xc000abe000) (1) Data frame handling\nI0423 00:12:37.465295 2177 log.go:172] (0xc000abe000) (1) Data frame sent\nI0423 00:12:37.465307 2177 log.go:172] (0xc00098c000) (0xc000abe000) Stream removed, broadcasting: 1\nI0423 00:12:37.465340 2177 log.go:172] (0xc00098c000) Go away received\nI0423 00:12:37.465656 2177 log.go:172] (0xc00098c000) (0xc000abe000) Stream removed, broadcasting: 1\nI0423 00:12:37.465676 2177 log.go:172] (0xc00098c000) (0xc0006c72c0) Stream removed, broadcasting: 3\nI0423 00:12:37.465688 2177 log.go:172] (0xc00098c000) (0xc000abe0a0) Stream removed, broadcasting: 5\n" Apr 23 00:12:37.469: INFO: stdout: "" Apr 23 00:12:37.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1852 execpodwrmtq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30227' Apr 23 00:12:37.674: INFO: stderr: "I0423 00:12:37.607047 2197 log.go:172] (0xc00003a790) (0xc000538000) Create stream\nI0423 00:12:37.607108 2197 log.go:172] (0xc00003a790) (0xc000538000) Stream added, broadcasting: 1\nI0423 00:12:37.609520 2197 log.go:172] (0xc00003a790) Reply frame received for 1\nI0423 00:12:37.609589 2197 log.go:172] (0xc00003a790) (0xc0008d2000) Create stream\nI0423 00:12:37.609614 2197 log.go:172] (0xc00003a790) (0xc0008d2000) Stream added, broadcasting: 3\nI0423 00:12:37.610666 2197 log.go:172] (0xc00003a790) Reply frame received for 3\nI0423 00:12:37.610724 2197 log.go:172] (0xc00003a790) (0xc00081b2c0) Create stream\nI0423 00:12:37.610740 2197 log.go:172] (0xc00003a790) (0xc00081b2c0) Stream added, broadcasting: 5\nI0423 00:12:37.611791 
2197 log.go:172] (0xc00003a790) Reply frame received for 5\nI0423 00:12:37.667314 2197 log.go:172] (0xc00003a790) Data frame received for 3\nI0423 00:12:37.667372 2197 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0423 00:12:37.667441 2197 log.go:172] (0xc00003a790) Data frame received for 5\nI0423 00:12:37.667480 2197 log.go:172] (0xc00081b2c0) (5) Data frame handling\nI0423 00:12:37.667508 2197 log.go:172] (0xc00081b2c0) (5) Data frame sent\nI0423 00:12:37.667523 2197 log.go:172] (0xc00003a790) Data frame received for 5\nI0423 00:12:37.667536 2197 log.go:172] (0xc00081b2c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30227\nConnection to 172.17.0.13 30227 port [tcp/30227] succeeded!\nI0423 00:12:37.669448 2197 log.go:172] (0xc00003a790) Data frame received for 1\nI0423 00:12:37.669479 2197 log.go:172] (0xc000538000) (1) Data frame handling\nI0423 00:12:37.669516 2197 log.go:172] (0xc000538000) (1) Data frame sent\nI0423 00:12:37.669536 2197 log.go:172] (0xc00003a790) (0xc000538000) Stream removed, broadcasting: 1\nI0423 00:12:37.669776 2197 log.go:172] (0xc00003a790) Go away received\nI0423 00:12:37.669903 2197 log.go:172] (0xc00003a790) (0xc000538000) Stream removed, broadcasting: 1\nI0423 00:12:37.669921 2197 log.go:172] (0xc00003a790) (0xc0008d2000) Stream removed, broadcasting: 3\nI0423 00:12:37.669931 2197 log.go:172] (0xc00003a790) (0xc00081b2c0) Stream removed, broadcasting: 5\n" Apr 23 00:12:37.674: INFO: stdout: "" Apr 23 00:12:37.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1852 execpodwrmtq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30227' Apr 23 00:12:37.871: INFO: stderr: "I0423 00:12:37.796050 2219 log.go:172] (0xc00096e580) (0xc000837220) Create stream\nI0423 00:12:37.796117 2219 log.go:172] (0xc00096e580) (0xc000837220) Stream added, broadcasting: 1\nI0423 00:12:37.798512 2219 log.go:172] (0xc00096e580) Reply frame received for 1\nI0423 
00:12:37.798544 2219 log.go:172] (0xc00096e580) (0xc000a42000) Create stream\nI0423 00:12:37.798552 2219 log.go:172] (0xc00096e580) (0xc000a42000) Stream added, broadcasting: 3\nI0423 00:12:37.799533 2219 log.go:172] (0xc00096e580) Reply frame received for 3\nI0423 00:12:37.799583 2219 log.go:172] (0xc00096e580) (0xc0004be000) Create stream\nI0423 00:12:37.799596 2219 log.go:172] (0xc00096e580) (0xc0004be000) Stream added, broadcasting: 5\nI0423 00:12:37.800677 2219 log.go:172] (0xc00096e580) Reply frame received for 5\nI0423 00:12:37.864737 2219 log.go:172] (0xc00096e580) Data frame received for 5\nI0423 00:12:37.864782 2219 log.go:172] (0xc0004be000) (5) Data frame handling\nI0423 00:12:37.864797 2219 log.go:172] (0xc0004be000) (5) Data frame sent\nI0423 00:12:37.864809 2219 log.go:172] (0xc00096e580) Data frame received for 5\nI0423 00:12:37.864820 2219 log.go:172] (0xc0004be000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30227\nConnection to 172.17.0.12 30227 port [tcp/30227] succeeded!\nI0423 00:12:37.864843 2219 log.go:172] (0xc00096e580) Data frame received for 3\nI0423 00:12:37.864865 2219 log.go:172] (0xc000a42000) (3) Data frame handling\nI0423 00:12:37.866685 2219 log.go:172] (0xc00096e580) Data frame received for 1\nI0423 00:12:37.866716 2219 log.go:172] (0xc000837220) (1) Data frame handling\nI0423 00:12:37.866732 2219 log.go:172] (0xc000837220) (1) Data frame sent\nI0423 00:12:37.866760 2219 log.go:172] (0xc00096e580) (0xc000837220) Stream removed, broadcasting: 1\nI0423 00:12:37.867064 2219 log.go:172] (0xc00096e580) (0xc000837220) Stream removed, broadcasting: 1\nI0423 00:12:37.867082 2219 log.go:172] (0xc00096e580) (0xc000a42000) Stream removed, broadcasting: 3\nI0423 00:12:37.867091 2219 log.go:172] (0xc00096e580) (0xc0004be000) Stream removed, broadcasting: 5\n" Apr 23 00:12:37.871: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 
23 00:12:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1852" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.345 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":152,"skipped":2634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:37.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:12:37.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3823" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":153,"skipped":2657,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:12:37.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-7m7w STEP: Creating a pod to test atomic-volume-subpath Apr 23 00:12:38.114: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7m7w" in namespace "subpath-4780" to be "Succeeded or Failed" Apr 23 00:12:38.117: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227448ms Apr 23 00:12:40.122: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007870269s Apr 23 00:12:42.126: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.012144107s Apr 23 00:12:44.130: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 6.016081459s Apr 23 00:12:46.134: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 8.020427163s Apr 23 00:12:48.138: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 10.024579558s Apr 23 00:12:50.143: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 12.029145695s Apr 23 00:12:52.148: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 14.033848736s Apr 23 00:12:54.151: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 16.037665676s Apr 23 00:12:56.155: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 18.041589362s Apr 23 00:12:58.159: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 20.045402308s Apr 23 00:13:00.163: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Running", Reason="", readiness=true. Elapsed: 22.049618162s Apr 23 00:13:02.168: INFO: Pod "pod-subpath-test-projected-7m7w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053921936s STEP: Saw pod success Apr 23 00:13:02.168: INFO: Pod "pod-subpath-test-projected-7m7w" satisfied condition "Succeeded or Failed" Apr 23 00:13:02.170: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-7m7w container test-container-subpath-projected-7m7w: STEP: delete the pod Apr 23 00:13:02.203: INFO: Waiting for pod pod-subpath-test-projected-7m7w to disappear Apr 23 00:13:02.207: INFO: Pod pod-subpath-test-projected-7m7w no longer exists STEP: Deleting pod pod-subpath-test-projected-7m7w Apr 23 00:13:02.207: INFO: Deleting pod "pod-subpath-test-projected-7m7w" in namespace "subpath-4780" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:13:02.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4780" for this suite. • [SLOW TEST:24.238 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":154,"skipped":2657,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:13:02.217: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-5a33e1d8-338c-493d-bb8d-9baa62ae65e6 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5a33e1d8-338c-493d-bb8d-9baa62ae65e6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:13:08.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9958" for this suite. • [SLOW TEST:6.152 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2666,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:13:08.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:13:08.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee" in namespace "downward-api-3262" to be "Succeeded or Failed" Apr 23 00:13:08.441: INFO: Pod "downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.319487ms Apr 23 00:13:10.445: INFO: Pod "downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007905178s Apr 23 00:13:12.450: INFO: Pod "downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012426798s STEP: Saw pod success Apr 23 00:13:12.450: INFO: Pod "downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee" satisfied condition "Succeeded or Failed" Apr 23 00:13:12.453: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee container client-container: STEP: delete the pod Apr 23 00:13:12.472: INFO: Waiting for pod downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee to disappear Apr 23 00:13:12.477: INFO: Pod downwardapi-volume-2518493d-c2e3-4b1d-9e9d-b3670cf66bee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:13:12.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3262" for this suite. 
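The downward API test above mounts pod metadata as a volume and verifies the container can read its own pod name. A hedged sketch of the kind of pod spec that exercises this (field paths follow the core/v1 `downwardAPI` volume source; the pod name and mount path here are illustrative, only the `client-container` name comes from the log):

```yaml
# Illustrative pod, not the exact object the test creates: the
# downwardAPI volume projects metadata.name into a file, which the
# container prints so the test can verify it from the pod logs.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name seen in the log
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```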
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2670,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:13:12.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 23 00:13:12.556: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 23 00:13:12.567: INFO: Waiting for terminating namespaces to be deleted... 
Apr 23 00:13:12.570: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 23 00:13:12.575: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:13:12.575: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 00:13:12.575: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:13:12.575: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 00:13:12.575: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 23 00:13:12.580: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:13:12.580: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 00:13:12.580: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:13:12.580: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 00:13:12.580: INFO: pod-configmaps-ba0f3711-57f6-4d2f-8e83-615cb25a5b46 from configmap-9958 started at 2020-04-23 00:13:02 +0000 UTC (1 container statuses recorded)
Apr 23 00:13:12.580: INFO: Container configmap-volume-test ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5a23bdc8-f136-4d8b-b22b-d061e7fa56f9 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-5a23bdc8-f136-4d8b-b22b-d061e7fa56f9 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5a23bdc8-f136-4d8b-b22b-d061e7fa56f9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:20.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6654" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.255 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":157,"skipped":2692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:20.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:18:20.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2" in namespace "projected-9576" to be "Succeeded or Failed"
Apr 23 00:18:20.811: INFO: Pod "downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.424261ms
Apr 23 00:18:22.814: INFO: Pod "downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006769624s
Apr 23 00:18:24.818: INFO: Pod "downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010665759s
STEP: Saw pod success
Apr 23 00:18:24.818: INFO: Pod "downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2" satisfied condition "Succeeded or Failed"
Apr 23 00:18:24.821: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2 container client-container:
STEP: delete the pod
Apr 23 00:18:24.877: INFO: Waiting for pod downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2 to disappear
Apr 23 00:18:24.906: INFO: Pod downwardapi-volume-6f056ab2-1bb4-48d4-9d5f-8c968e613ca2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9576" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2718,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:24.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:41.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6315" for this suite.
• [SLOW TEST:16.227 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":159,"skipped":2722,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:41.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-969245ae-35db-491c-ad78-2a18f971a05f
STEP: Creating secret with name secret-projected-all-test-volume-60fdd1cf-91d5-40b8-9f09-7db3adb833ef
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 23 00:18:41.216: INFO: Waiting up to 5m0s for pod "projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5" in namespace "projected-6997" to be "Succeeded or Failed"
Apr 23 00:18:41.228: INFO: Pod "projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.751142ms
Apr 23 00:18:43.240: INFO: Pod "projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024360925s
Apr 23 00:18:45.244: INFO: Pod "projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027854174s
STEP: Saw pod success
Apr 23 00:18:45.244: INFO: Pod "projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5" satisfied condition "Succeeded or Failed"
Apr 23 00:18:45.247: INFO: Trying to get logs from node latest-worker pod projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5 container projected-all-volume-test:
STEP: delete the pod
Apr 23 00:18:45.281: INFO: Waiting for pod projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5 to disappear
Apr 23 00:18:45.286: INFO: Pod projected-volume-ade9bfe2-57dc-40ef-a2b6-3eddab5a2bd5 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:45.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6997" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2731,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:45.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 23 00:18:49.470: INFO: Waiting up to 5m0s for pod "client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6" in namespace "pods-5457" to be "Succeeded or Failed"
Apr 23 00:18:49.487: INFO: Pod "client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.514036ms
Apr 23 00:18:51.582: INFO: Pod "client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111827829s
Apr 23 00:18:53.586: INFO: Pod "client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115943127s
STEP: Saw pod success
Apr 23 00:18:53.586: INFO: Pod "client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6" satisfied condition "Succeeded or Failed"
Apr 23 00:18:53.589: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6 container env3cont:
STEP: delete the pod
Apr 23 00:18:54.019: INFO: Waiting for pod client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6 to disappear
Apr 23 00:18:54.022: INFO: Pod client-envvars-0a5a09ae-b10d-40a3-b806-920de12eb3e6 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:54.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5457" for this suite.
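The behavior this spec checks is the kubelet's Docker-link-style service discovery: for each Service that exists when a pod starts, the pod's containers receive environment variables derived from the service name. A small sketch of the naming rule (a partial model; the kubelet also injects per-port `_PORT_<n>_TCP*` variables not shown here):

```python
def service_env_vars(service_name: str, cluster_ip: str, port: int) -> dict:
    """Model the core environment variables the kubelet injects for a Service
    visible at pod start: the name is upper-cased and hyphens become
    underscores. Partial sketch; real injection adds more per-port variables."""
    prefix = service_name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
        f"{prefix}_PORT": f"tcp://{cluster_ip}:{port}",
    }

# Hypothetical service for illustration (name and IP are not from the log):
env = service_env_vars("fooservice-1", "10.96.0.10", 8765)
```

The test's `env3cont` container dumps its environment and the framework greps for these names.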
• [SLOW TEST:8.738 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2744,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:54.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Apr 23 00:18:54.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Apr 23 00:18:56.776: INFO: stderr: ""
Apr 23 00:18:56.776: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:18:56.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1074" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":162,"skipped":2755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:18:56.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 23 00:18:56.852: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c" in namespace "projected-2608" to be "Succeeded or Failed"
Apr 23 00:18:56.856: INFO: Pod "downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123808ms
Apr 23 00:18:58.859: INFO: Pod "downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00689569s
Apr 23 00:19:00.863: INFO: Pod "downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010178952s
STEP: Saw pod success
Apr 23 00:19:00.863: INFO: Pod "downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c" satisfied condition "Succeeded or Failed"
Apr 23 00:19:00.866: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c container client-container:
STEP: delete the pod
Apr 23 00:19:00.907: INFO: Waiting for pod downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c to disappear
Apr 23 00:19:00.921: INFO: Pod downwardapi-volume-c1ede48d-5039-464c-92ff-3927494d135c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:19:00.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2608" for this suite.
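Unlike the podname spec above, the "container's cpu request" spec exercises `resourceFieldRef` rather than `fieldRef`, wrapped in a projected volume. A sketch of the relevant volume source follows; the container name, file path, and divisor here are illustrative assumptions:

```python
# Sketch of a projected downward API volume source exposing the container's
# CPU request (illustrative values, not the e2e fixture itself).
projected_volume_source = {
    "projected": {
        "sources": [{
            "downwardAPI": {
                "items": [{
                    "path": "cpu_request",  # file the container reads back
                    "resourceFieldRef": {
                        # resourceFieldRef needs the container whose resources
                        # are being exposed, unlike pod-level fieldRef.
                        "containerName": "client-container",
                        "resource": "requests.cpu",
                        "divisor": "1m",  # report the value in millicores
                    },
                }],
            },
        }],
    },
}
```

The companion spec earlier in the log ("node allocatable (cpu) as default cpu limit") uses the same mechanism with `limits.cpu` and relies on the API defaulting to node allocatable when no limit is set.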
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:19:00.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 23 00:19:09.036: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:09.039: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:11.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:11.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:13.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:13.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:15.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:15.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:17.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:17.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:19.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:19.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:21.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:21.044: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 00:19:23.040: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 00:19:23.044: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:19:23.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5472" for this suite.
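The shape of the pod under test: a container with a `preStop` `httpGet` lifecycle hook that fires against the handler container created in the BeforeEach, which is why deleting the pod takes several poll cycles (the kubelet runs the hook, then honors the grace period). A sketch; the image, port, path, and target host are illustrative assumptions, only the pod name comes from the log:

```python
# Sketch of a pod with a preStop HTTP lifecycle hook (illustrative values).
pod_with_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},  # name from the log
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "k8s.gcr.io/pause:3.2",  # assumed image
            "lifecycle": {
                "preStop": {
                    # Runs before the container receives SIGTERM; the e2e
                    # handler pod records the request so the test can verify it.
                    "httpGet": {
                        "path": "/echo?msg=prestop",  # assumed path
                        "port": 8080,                 # assumed port
                        "host": "10.244.0.5",         # assumed handler pod IP
                    },
                },
            },
        }],
    },
}
```

"check prestop hook" then asserts that the handler actually received the GET before the pod disappeared.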
• [SLOW TEST:22.163 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2817,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:19:23.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 23 00:19:26.194: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:19:26.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5821" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2828,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:19:26.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:19:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7549" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":166,"skipped":2832,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:19:26.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 23 00:19:27.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 23 00:19:29.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197967, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197967, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197967, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723197967, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 23 00:19:32.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 23 00:19:36.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-357 to-be-attached-pod -i -c=container1'
Apr 23 00:19:36.264: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:19:36.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-357" for this suite.
STEP: Destroying namespace "webhook-357-markers" for this suite.
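A `kubectl attach` is a CONNECT request on the `pods/attach` subresource, so the webhook the test registers matches that verb and resource and rejects it, producing the `rc: 1` above. A sketch of such a ValidatingWebhookConfiguration follows; the namespace and service name are taken from the log, while the webhook name, path, and CA bundle placeholder are illustrative assumptions:

```python
# Sketch of a validating webhook that denies attaching to pods
# (illustrative, not the exact object the e2e test registers).
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-attaching-pod.example.com"},  # assumed name
    "webhooks": [{
        "name": "deny-attaching-pod.example.com",
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            # kubectl attach surfaces as CONNECT on the pods/attach subresource.
            "operations": ["CONNECT"],
            "resources": ["pods/attach"],
        }],
        "clientConfig": {
            "service": {
                "namespace": "webhook-357",     # from the log
                "name": "e2e-test-webhook",     # from the log
                "path": "/pods/attach",         # assumed path
            },
            "caBundle": "<base64-encoded CA>",  # placeholder
        },
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
        "failurePolicy": "Fail",
    }],
}
```

With `failurePolicy: Fail`, even a webhook outage would keep attach requests denied rather than silently allowed.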
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":167,"skipped":2834,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:19:36.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:19:47.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7425" for this suite. • [SLOW TEST:11.160 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":168,"skipped":2851,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:19:47.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 23 00:19:47.571: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 23 00:19:47.582: INFO: Waiting for terminating namespaces to be deleted... 
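For context on the ResourceQuota test that just passed (completed 168): it creates a quota that counts Service objects and verifies `status.used` rises on service creation and falls on deletion. A minimal quota manifest that would exercise the same accounting; the name here is illustrative, not taken from the run above:

```yaml
# Illustrative ResourceQuota counting Service objects, similar in spirit to
# what the e2e test creates in its generated namespace (name is hypothetical).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: service-quota-example
spec:
  hard:
    services: "10"   # status.used.services tracks live Service objects
```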
Apr 23 00:19:47.584: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 23 00:19:47.589: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:19:47.589: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 00:19:47.589: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:19:47.589: INFO: Container kindnet-cni ready: true, restart count 0 Apr 23 00:19:47.589: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 23 00:19:47.603: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:19:47.603: INFO: Container kindnet-cni ready: true, restart count 0 Apr 23 00:19:47.603: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 23 00:19:47.603: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 00:19:47.603: INFO: to-be-attached-pod from webhook-357 started at 2020-04-23 00:19:32 +0000 UTC (1 container statuses recorded) Apr 23 00:19:47.603: INFO: Container container1 ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-79db3a84-da45-4644-be7f-5f5ccb5e7ebb STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-79db3a84-da45-4644-be7f-5f5ccb5e7ebb off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-79db3a84-da45-4644-be7f-5f5ccb5e7ebb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:20:03.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-731" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.285 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":169,"skipped":2864,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:20:03.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-cab086f9-0747-47c9-bcdb-99c2114c3261 STEP: Creating a pod to test consume configMaps Apr 23 00:20:03.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208" in namespace "configmap-9244" to be "Succeeded or Failed" Apr 23 00:20:03.888: INFO: Pod "pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208": Phase="Pending", Reason="", readiness=false. Elapsed: 25.074776ms Apr 23 00:20:05.892: INFO: Pod "pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029425141s Apr 23 00:20:07.897: INFO: Pod "pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034290922s STEP: Saw pod success Apr 23 00:20:07.897: INFO: Pod "pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208" satisfied condition "Succeeded or Failed" Apr 23 00:20:07.900: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208 container configmap-volume-test: STEP: delete the pod Apr 23 00:20:07.919: INFO: Waiting for pod pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208 to disappear Apr 23 00:20:07.934: INFO: Pod pod-configmaps-941027e9-d6e7-4459-8ddf-966d90f39208 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:20:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9244" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:20:07.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector 
mistakenly deletes the pods STEP: Gathering metrics W0423 00:20:49.350999 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 23 00:20:49.351: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:20:49.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2696" for this suite. 
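The 30-second wait in the garbage collector test above verifies that the pods survive when the RC's deletion requests orphaning rather than cascading. Outside the suite, the same behavior is requested through the API's DeleteOptions; an illustrative request body (this is the standard API shape, not code copied from the test):

```yaml
# DeleteOptions sent with the DELETE request for the ReplicationController;
# "Orphan" tells the garbage collector to leave dependent pods in place.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl this corresponds to `kubectl delete rc <name> --cascade=orphan` (older releases spelled it `--cascade=false`).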
• [SLOW TEST:41.448 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":171,"skipped":2920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:20:49.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-859dda34-78e1-40e5-81c6-46b4e8a80743 STEP: Creating a pod to test consume configMaps Apr 23 00:20:49.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-330c6975-6095-43dc-826e-0900074f728c" in namespace "configmap-3804" to be "Succeeded or Failed" Apr 23 00:20:49.526: INFO: Pod "pod-configmaps-330c6975-6095-43dc-826e-0900074f728c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.909959ms Apr 23 00:20:51.626: INFO: Pod "pod-configmaps-330c6975-6095-43dc-826e-0900074f728c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116093599s Apr 23 00:20:53.630: INFO: Pod "pod-configmaps-330c6975-6095-43dc-826e-0900074f728c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119959159s STEP: Saw pod success Apr 23 00:20:53.630: INFO: Pod "pod-configmaps-330c6975-6095-43dc-826e-0900074f728c" satisfied condition "Succeeded or Failed" Apr 23 00:20:53.633: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-330c6975-6095-43dc-826e-0900074f728c container configmap-volume-test: STEP: delete the pod Apr 23 00:20:53.756: INFO: Waiting for pod pod-configmaps-330c6975-6095-43dc-826e-0900074f728c to disappear Apr 23 00:20:53.760: INFO: Pod pod-configmaps-330c6975-6095-43dc-826e-0900074f728c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:20:53.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3804" for this suite. 
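The defaultMode variant just cleaned up in configmap-3804 mounts a ConfigMap volume with non-default file permissions. A sketch of such a pod spec, with hypothetical names (the real test generates random names and checks the mounted file's mode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-example   # hypothetical; the test uses generated names
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/config"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: my-configmap     # hypothetical ConfigMap name
      defaultMode: 0400      # files in the volume are created owner-read-only
```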
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2955,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:20:53.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 23 00:20:54.276: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 23 00:20:56.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:20:58.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198054, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:21:01.447: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:21:01.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] 
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:21:02.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2761" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.032 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":173,"skipped":2957,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:02.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 
00:21:02.840: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 23 00:21:04.889: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:21:05.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6411" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":174,"skipped":2973,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:05.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 23 00:21:06.159: INFO: Waiting up to 5m0s for pod "client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba" in namespace "containers-9555" to be "Succeeded or 
Failed" Apr 23 00:21:06.162: INFO: Pod "client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665788ms Apr 23 00:21:08.171: INFO: Pod "client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011996139s Apr 23 00:21:10.175: INFO: Pod "client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016387941s STEP: Saw pod success Apr 23 00:21:10.175: INFO: Pod "client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba" satisfied condition "Succeeded or Failed" Apr 23 00:21:10.179: INFO: Trying to get logs from node latest-worker2 pod client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba container test-container: STEP: delete the pod Apr 23 00:21:10.208: INFO: Waiting for pod client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba to disappear Apr 23 00:21:10.237: INFO: Pod client-containers-4a824466-09f3-4fe6-840e-c6a726cc13ba no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:21:10.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9555" for this suite. 
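The Docker Containers test above overrides the image's default arguments: in a pod spec, `args` replaces the image's CMD while `command` (not set here) would replace the ENTRYPOINT. An illustrative spec with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-args-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # args replaces the image's CMD; the image's ENTRYPOINT still runs
    args: ["echo", "overridden arguments"]
```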
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2978,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:10.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:21:10.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3" in namespace "downward-api-2125" to be "Succeeded or Failed" Apr 23 00:21:10.303: INFO: Pod "downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274587ms Apr 23 00:21:12.308: INFO: Pod "downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008486488s Apr 23 00:21:14.312: INFO: Pod "downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012992532s STEP: Saw pod success Apr 23 00:21:14.312: INFO: Pod "downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3" satisfied condition "Succeeded or Failed" Apr 23 00:21:14.316: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3 container client-container: STEP: delete the pod Apr 23 00:21:14.383: INFO: Waiting for pod downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3 to disappear Apr 23 00:21:14.399: INFO: Pod downwardapi-volume-74c1315e-0ffa-4a01-b6d8-71552b8646c3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:21:14.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2125" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:14.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local A)" 
&& test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2624.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2624.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local;podARec=$$(hostname -i| awk 
-F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2624.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:21:20.536: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.539: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.541: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.544: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.550: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.552: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local 
from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.554: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.557: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:20.561: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:25.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local 
from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.577: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.588: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.592: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.595: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.598: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:25.605: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:30.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.576: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.584: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.586: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.588: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod 
dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.591: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:30.596: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:35.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.578: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod 
dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.586: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.588: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.590: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.592: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:35.597: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:40.566: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.577: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.590: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.593: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.596: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:40.602: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:45.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.574: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.577: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.586: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.589: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.592: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.595: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local from pod dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737: the server could not find the requested resource (get pods dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737) Apr 23 00:21:45.604: INFO: Lookups using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2624.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2624.svc.cluster.local jessie_udp@dns-test-service-2.dns-2624.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2624.svc.cluster.local] Apr 23 00:21:50.600: INFO: DNS probes using dns-2624/dns-test-ea3f44b8-2f1e-4b1f-9228-6702d9ac0737 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 
00:21:50.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2624" for this suite. • [SLOW TEST:36.718 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":177,"skipped":3022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:51.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:21:51.980: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Apr 23 00:21:54.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198112, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198112, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198112, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198111, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:21:57.054: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:21:57.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3147" for this suite. STEP: Destroying namespace "webhook-3147-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.069 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":178,"skipped":3063,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:21:57.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:01.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2325" for this suite. 
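The "Lookups … failed for:" lines in the DNS probe output earlier in this log enumerate a fixed cross-product of probe names. The names themselves (`dns-test-service-2`, `dns-querier-2`, namespace `dns-2624`) come straight from the log; the loop structure below is an assumption inferred from the ordering of the printed list, not the e2e framework's actual code.

```python
# Sketch: reconstruct the 8 probe names seen in the failure lists above.
# Ordering assumption: resolver image outermost, then target, then protocol.
def probe_names(service, querier, namespace):
    suffix = f"{namespace}.svc.cluster.local"
    names = []
    for image in ("wheezy", "jessie"):                # the two resolver images
        for target in (f"{querier}.{service}", service):
            for proto in ("udp", "tcp"):
                names.append(f"{image}_{proto}@{target}.{suffix}")
    return names

names = probe_names("dns-test-service-2", "dns-querier-2", "dns-2624")
```

Run against the log's values, this reproduces the eight entries of each failure list in the same order, from `wheezy_udp@dns-querier-2.…` through `jessie_tcp@dns-test-service-2.…`.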
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:01.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:05.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7383" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":180,"skipped":3107,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:05.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d049dbc0-4e2f-431a-8391-1fad07e99c54 STEP: Creating a pod to test consume configMaps Apr 23 00:22:05.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686" in namespace "configmap-4069" to be "Succeeded or Failed" Apr 23 00:22:05.617: INFO: Pod "pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686": Phase="Pending", Reason="", readiness=false. Elapsed: 76.80768ms Apr 23 00:22:07.639: INFO: Pod "pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098557059s Apr 23 00:22:09.643: INFO: Pod "pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102754285s STEP: Saw pod success Apr 23 00:22:09.643: INFO: Pod "pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686" satisfied condition "Succeeded or Failed" Apr 23 00:22:09.646: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686 container configmap-volume-test: STEP: delete the pod Apr 23 00:22:09.677: INFO: Waiting for pod pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686 to disappear Apr 23 00:22:09.704: INFO: Pod pod-configmaps-923e8627-f18c-466a-ab3b-e0d1633c1686 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:09.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4069" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:09.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 23 00:22:09.777: INFO: Waiting 
up to 5m0s for pod "var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c" in namespace "var-expansion-6179" to be "Succeeded or Failed" Apr 23 00:22:09.793: INFO: Pod "var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.832715ms Apr 23 00:22:11.797: INFO: Pod "var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019759891s Apr 23 00:22:13.800: INFO: Pod "var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023492574s STEP: Saw pod success Apr 23 00:22:13.800: INFO: Pod "var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c" satisfied condition "Succeeded or Failed" Apr 23 00:22:13.803: INFO: Trying to get logs from node latest-worker2 pod var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c container dapi-container: STEP: delete the pod Apr 23 00:22:13.830: INFO: Waiting for pod var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c to disappear Apr 23 00:22:13.834: INFO: Pod var-expansion-158804f2-2077-47ed-a59a-fb3a8c2b452c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:13.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6179" for this suite. 
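The "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" sequences that recur throughout this log follow one pattern: poll the pod phase on a fixed interval until a terminal phase or a timeout. A minimal sketch of that loop, with `get_phase` as a hypothetical stand-in for a real API call (not a client-go function):

```python
# Sketch of the phase-polling pattern seen in the log. A real implementation
# would time.sleep(interval_s) between polls; here we only track elapsed time.
def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    elapsed = 0.0
    phase = None
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):          # terminal pod phases
            return phase, elapsed
        elapsed += interval_s
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Simulate the transitions logged above: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```

With the simulated sequence above, the loop returns after two Pending polls, mirroring the ~4s elapsed times the log reports for these pods.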
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3190,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:13.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:13.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2095" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3192,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:14.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d Apr 23 00:22:14.200: INFO: Pod name my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d: Found 0 pods out of 1 Apr 23 00:22:19.202: INFO: Pod name my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d: Found 1 pods out of 1 Apr 23 00:22:19.202: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d" are running Apr 23 00:22:19.205: INFO: Pod "my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d-wr8cp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:22:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:22:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 
00:22:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 00:22:14 +0000 UTC Reason: Message:}]) Apr 23 00:22:19.205: INFO: Trying to dial the pod Apr 23 00:22:24.250: INFO: Controller my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d: Got expected result from replica 1 [my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d-wr8cp]: "my-hostname-basic-4f680193-1863-44d8-ad55-52479f42af1d-wr8cp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:24.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6511" for this suite. • [SLOW TEST:10.224 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":184,"skipped":3209,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:24.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ce0b5e50-a89a-4e1a-adb8-fb65240fe19e STEP: Creating a pod to test consume configMaps Apr 23 00:22:24.337: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43" in namespace "projected-3982" to be "Succeeded or Failed" Apr 23 00:22:24.348: INFO: Pod "pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43": Phase="Pending", Reason="", readiness=false. Elapsed: 10.555515ms Apr 23 00:22:26.351: INFO: Pod "pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014284188s Apr 23 00:22:28.355: INFO: Pod "pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018417381s STEP: Saw pod success Apr 23 00:22:28.356: INFO: Pod "pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43" satisfied condition "Succeeded or Failed" Apr 23 00:22:28.359: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43 container projected-configmap-volume-test: STEP: delete the pod Apr 23 00:22:28.391: INFO: Waiting for pod pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43 to disappear Apr 23 00:22:28.406: INFO: Pod pod-projected-configmaps-d9ea6f86-b0d2-4489-8cf0-293ef2975a43 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:28.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3982" for this suite. 
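The inline `{"msg":"PASSED …","total":275,"completed":…,"skipped":…,"failed":0}` records interleaved with this output are valid JSON, so a log consumer can parse them to track suite progress. A small sketch, using the record emitted just above as sample input:

```python
import json

# Sample progress record copied from this log (completed test #185 of 275).
line = ('{"msg":"PASSED [sig-storage] Projected configMap should be consumable '
        'from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] '
        '[Conformance]","total":275,"completed":185,"skipped":3214,"failed":0}')

record = json.loads(line)
remaining = record["total"] - record["completed"]   # specs left to run
```

Here `remaining` works out to 90, and `record["failed"] == 0` confirms no failures at that point in the run.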
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3214,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:28.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:22:29.095: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:22:31.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:22:33.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198149, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:22:36.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:36.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6930" for this suite. STEP: Destroying namespace "webhook-6930-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.050 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":186,"skipped":3220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:36.464: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-78288a2d-93cd-4b60-babd-2ee5a1292ed2 STEP: Creating a pod to test consume configMaps Apr 23 00:22:36.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c" in namespace "configmap-7318" to be "Succeeded or Failed" Apr 23 00:22:36.629: INFO: Pod "pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.953977ms Apr 23 00:22:38.633: INFO: Pod "pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007164097s Apr 23 00:22:40.637: INFO: Pod "pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011701507s STEP: Saw pod success Apr 23 00:22:40.637: INFO: Pod "pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c" satisfied condition "Succeeded or Failed" Apr 23 00:22:40.640: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c container configmap-volume-test: STEP: delete the pod Apr 23 00:22:40.660: INFO: Waiting for pod pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c to disappear Apr 23 00:22:40.665: INFO: Pod pod-configmaps-0297fbe1-3c75-41d4-b4a5-74a3d314ae9c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:40.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7318" for this suite. 
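The test above consumes a single ConfigMap through two volumes in the same pod. As a hedged sketch (the pod, volume, and mount-path names here are illustrative, not the ones the e2e framework actually generates), the shape of such a pod spec can be assembled as a plain dict:

```python
# Sketch of a pod consuming one ConfigMap via two volumes, mirroring the
# "consumable in multiple volumes in the same pod" conformance test. All
# names and paths below are hypothetical stand-ins.
CONFIGMAP_NAME = "configmap-test-volume"  # hypothetical ConfigMap name

def configmap_pod(configmap_name: str) -> dict:
    """Build a pod spec with two volumes backed by the same ConfigMap."""
    volumes = [
        {"name": "configmap-volume-1", "configMap": {"name": configmap_name}},
        {"name": "configmap-volume-2", "configMap": {"name": configmap_name}},
    ]
    mounts = [
        {"name": "configmap-volume-1", "mountPath": "/etc/configmap-volume-1"},
        {"name": "configmap-volume-2", "mountPath": "/etc/configmap-volume-2"},
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps"},
        "spec": {
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                "volumeMounts": mounts,
            }],
            "restartPolicy": "Never",
            "volumes": volumes,
        },
    }

pod = configmap_pod(CONFIGMAP_NAME)
# Both volumes reference the same ConfigMap but mount at distinct paths.
assert {v["configMap"]["name"] for v in pod["spec"]["volumes"]} == {CONFIGMAP_NAME}
```

The design point the test exercises is that each volume is an independent mount of the same underlying ConfigMap data, so both mount paths must expose identical contents.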
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3243,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:40.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:22:40.734: INFO: Creating deployment "webserver-deployment" Apr 23 00:22:40.750: INFO: Waiting for observed generation 1 Apr 23 00:22:42.771: INFO: Waiting for all required pods to come up Apr 23 00:22:42.776: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 23 00:22:52.787: INFO: Waiting for deployment "webserver-deployment" to complete Apr 23 00:22:52.792: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 23 00:22:52.798: INFO: Updating deployment webserver-deployment Apr 23 00:22:52.798: INFO: Waiting for observed generation 2 Apr 23 00:22:54.807: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 23 00:22:54.810: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 23 00:22:54.812: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired 
number of replicas Apr 23 00:22:54.818: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 23 00:22:54.818: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 23 00:22:54.820: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 23 00:22:54.823: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 23 00:22:54.823: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 23 00:22:54.828: INFO: Updating deployment webserver-deployment Apr 23 00:22:54.828: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 23 00:22:55.010: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 23 00:22:55.185: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 23 00:22:57.570: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5272 /apis/apps/v1/namespaces/deployment-5272/deployments/webserver-deployment 5e209ace-870c-43bb-847b-6533bca8828f 10261518 3 2020-04-23 00:22:40 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}] [] Always 0xc0034be038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-23 00:22:54 +0000 UTC,LastTransitionTime:2020-04-23 00:22:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-23 00:22:55 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 23 00:22:57.574: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5272 /apis/apps/v1/namespaces/deployment-5272/replicasets/webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 10261515 3 2020-04-23 00:22:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5e209ace-870c-43bb-847b-6533bca8828f 0xc0035b16d7 0xc0035b16d8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035b1748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:22:57.574: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 23 00:22:57.574: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5272 /apis/apps/v1/namespaces/deployment-5272/replicasets/webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 10261504 3 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5e209ace-870c-43bb-847b-6533bca8828f 0xc0035b1617 0xc0035b1618}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035b1678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-25j7z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-25j7z webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-25j7z cd247083-e83e-4b61-9c94-276458153abf 10261365 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034be4d7 0xc0034be4d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.213,StartTime:2020-04-23 00:22:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:50 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d83ed12525781bf7e479398a864327df14697a6e291c92e5640c38982d9b0127,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-4flz7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4flz7 webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-4flz7 ec3a4423-d35b-4f11-a5a5-07e35f1f3be3 10261299 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034be657 0xc0034be658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.209,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://195ebbfaa6cba64eb929e5b21fe62b670534a5959d3c0867533112eef950071a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-7hlbq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7hlbq webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-7hlbq 77984985-cf8d-4b90-8774-6bb923e7a278 10261494 0 2020-04-23 00:22:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034be7f7 0xc0034be7f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-7wgmg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7wgmg webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-7wgmg 98e13429-1146-4d6a-97b3-80edd4b1672d 10261519 0 2020-04-23 00:22:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034be957 0xc0034be958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-896x9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-896x9 webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-896x9 22c90e32-e3ba-43a9-9333-562b75504be7 10261344 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034beab7 0xc0034beab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.40,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7049f07f28ad522b47414a0a980626a8ec96ba7eee6aceefc031c40b9333bec2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.631: INFO: Pod "webserver-deployment-595b5b9587-8kjw6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8kjw6 webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-8kjw6 6bb19ade-d34d-4b29-9a5f-8eaeafa27e70 10261542 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bec37 0xc0034bec38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.632: INFO: Pod "webserver-deployment-595b5b9587-9xzvz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9xzvz webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-9xzvz 3dba0f3f-84d8-4510-9a9e-6ca97384e960 10261531 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bed97 0xc0034bed98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.632: INFO: Pod "webserver-deployment-595b5b9587-b5jkr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b5jkr webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-b5jkr aafbfcbc-b4a5-422b-9228-841120ee5214 10261530 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034beef7 0xc0034beef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.632: INFO: Pod "webserver-deployment-595b5b9587-b5p7g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b5p7g webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-b5p7g cffe5c0a-2117-40c2-b419-dc023c28df07 10261351 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf057 0xc0034bf058}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.41,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89aed407b1c21d02d62a0e914914140a037341fdade358327dd1e1232ce3a200,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.632: INFO: Pod "webserver-deployment-595b5b9587-dgjzf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dgjzf webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-dgjzf b9ed9354-627f-436a-8e8b-3a5c424ff1af 10261566 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf1d7 0xc0034bf1d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.632: INFO: Pod "webserver-deployment-595b5b9587-g86gl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g86gl webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-g86gl be625280-2578-494e-8ef8-ac07dc8b7b52 10261340 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf337 0xc0034bf338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.211,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1e8b703fa4d3131f18724fa32b1c87aa49a007af7f007d415ef4f194002f82e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.633: INFO: Pod "webserver-deployment-595b5b9587-k4mjp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k4mjp webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-k4mjp d6a6d80c-4793-4447-8ba4-b43877857867 10261547 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf4b7 0xc0034bf4b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.633: INFO: Pod "webserver-deployment-595b5b9587-k9tm2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k9tm2 webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-k9tm2 e5e8186f-0821-40a1-ae91-202586cea694 10261524 0 2020-04-23 00:22:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf617 0xc0034bf618}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.633: INFO: Pod "webserver-deployment-595b5b9587-pwxmf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pwxmf webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-pwxmf cd728dea-1baf-4d81-9d96-34d2e7dcb493 10261575 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf777 0xc0034bf778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.634: INFO: Pod "webserver-deployment-595b5b9587-rqph5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rqph5 webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-rqph5 dac26eef-a7a1-43bc-9e17-56445849e5c3 10261308 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bf8d7 0xc0034bf8d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.39,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://49e4ed81f7e5bde054e6740549d003bfadecd07619de0febfc3b47ea604d7c3f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.634: INFO: Pod "webserver-deployment-595b5b9587-rsf7l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rsf7l webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-rsf7l 982d85bf-c8be-4617-a562-1db774811c4e 10261533 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bfa57 0xc0034bfa58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.634: INFO: Pod "webserver-deployment-595b5b9587-v8tlm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v8tlm webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-v8tlm e9a507db-1575-43d4-9449-a54ee5b1dd2d 10261321 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bfbb7 0xc0034bfbb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.210,StartTime:2020-04-23 00:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dcc9d6055b49ce752543b17136d4fdb3212fc40044744c1366795e82f322280b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.634: INFO: Pod "webserver-deployment-595b5b9587-xjjjt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xjjjt webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-xjjjt aeaca388-43b9-4c9b-baf4-34a2b4105820 10261521 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bfd37 0xc0034bfd38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.634: INFO: Pod "webserver-deployment-595b5b9587-z72kg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z72kg webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-z72kg 1fcdd511-3a5c-47f5-9803-5625cf3b5bac 10261362 0 2020-04-23 00:22:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034bfe97 0xc0034bfe98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.212,StartTime:2020-04-23 00:22:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:22:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4352a61106ae314d036789ecec8215347e5314df00d88ab8347a28cbf70ab812,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-595b5b9587-zrvvc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zrvvc webserver-deployment-595b5b9587- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-595b5b9587-zrvvc dea920a5-e3fb-4686-9357-4b0bd95a11c5 10261534 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 ffb06083-8949-40d4-8be3-5e8d77ad56f3 0xc0034e2017 0xc0034e2018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-c7997dcc8-6s7c6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6s7c6 webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-6s7c6 3de8cd29-62b2-4149-886a-1f6559864e13 10261528 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2177 0xc0034e2178}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-c7997dcc8-79kzx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-79kzx webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-79kzx a8e3037c-a593-406d-b187-5ce4e6cf7654 10261432 0 2020-04-23 00:22:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e22f7 0xc0034e22f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-c7997dcc8-82xdm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-82xdm webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-82xdm 402614a9-0541-46e5-8f93-884824780133 10261431 0 2020-04-23 00:22:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2477 0xc0034e2478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-c7997dcc8-cjd6l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cjd6l webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-cjd6l eb654b16-f855-44ea-a5c5-d87bff13b8b5 10261569 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e25f7 0xc0034e25f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.635: INFO: Pod "webserver-deployment-c7997dcc8-gfs6z" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gfs6z webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-gfs6z bd23f5cc-5f50-4eb3-bc05-f4377bc7c5be 10261425 0 2020-04-23 00:22:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2777 0xc0034e2778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.641: INFO: Pod "webserver-deployment-c7997dcc8-hffkv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hffkv webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-hffkv ab7bda09-7156-429f-8287-f077cc4b4a25 10261526 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e28f7 0xc0034e28f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.642: INFO: Pod "webserver-deployment-c7997dcc8-m9mms" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m9mms webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-m9mms cdd8390e-4bad-4f81-b86c-baa8123909f6 10261559 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2a77 0xc0034e2a78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.642: INFO: Pod "webserver-deployment-c7997dcc8-qjw8v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qjw8v webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-qjw8v 909077fe-487c-49b1-911f-5b8b06d71342 10261538 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2bf7 0xc0034e2bf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.642: INFO: Pod "webserver-deployment-c7997dcc8-rmr72" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rmr72 webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-rmr72 aa966f31-c19b-4286-973e-8bb6c5ab1406 10261577 0 2020-04-23 00:22:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2d77 0xc0034e2d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.214,StartTime:2020-04-23 00:22:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.643: INFO: Pod "webserver-deployment-c7997dcc8-skdvs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-skdvs webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-skdvs c56d8d6b-2976-47dc-87d0-6bbb7383578e 10261574 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e2f27 0xc0034e2f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPo
licy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.643: INFO: Pod "webserver-deployment-c7997dcc8-t8m7g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t8m7g webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-t8m7g 2182d46d-44e0-4267-adb9-7aa6020617c6 10261562 0 2020-04-23 00:22:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e30a7 0xc0034e30a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.644: INFO: Pod "webserver-deployment-c7997dcc8-wg9tm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wg9tm webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-wg9tm 83a77447-4aaa-46aa-be51-0b12d8513812 10261505 0 2020-04-23 00:22:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e3227 0xc0034e3228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-23 00:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:22:57.644: INFO: Pod "webserver-deployment-c7997dcc8-zfdjv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zfdjv webserver-deployment-c7997dcc8- deployment-5272 /api/v1/namespaces/deployment-5272/pods/webserver-deployment-c7997dcc8-zfdjv 917a608b-a09a-4eaf-af6a-5b967d02371a 10261402 0 2020-04-23 00:22:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6a9c815-357f-4dba-bd53-0b505829f3e1 0xc0034e33a7 0xc0034e33a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wss7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wss7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wss7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-23 00:22:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:22:57.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5272" for this suite. • [SLOW TEST:17.138 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":188,"skipped":3244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:22:57.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:23:11.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8092" for this suite. • [SLOW TEST:13.600 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":189,"skipped":3275,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:23:11.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override 
command Apr 23 00:23:12.049: INFO: Waiting up to 5m0s for pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc" in namespace "containers-2455" to be "Succeeded or Failed" Apr 23 00:23:12.077: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.635571ms Apr 23 00:23:14.252: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203099744s Apr 23 00:23:16.256: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207055723s Apr 23 00:23:18.259: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Running", Reason="", readiness=true. Elapsed: 6.210274287s Apr 23 00:23:20.263: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Running", Reason="", readiness=true. Elapsed: 8.213940648s Apr 23 00:23:22.268: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.21829637s STEP: Saw pod success Apr 23 00:23:22.268: INFO: Pod "client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc" satisfied condition "Succeeded or Failed" Apr 23 00:23:22.271: INFO: Trying to get logs from node latest-worker2 pod client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc container test-container: STEP: delete the pod Apr 23 00:23:22.292: INFO: Waiting for pod client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc to disappear Apr 23 00:23:22.296: INFO: Pod client-containers-29f7d03c-faab-4c78-988a-d4ee86c48acc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:23:22.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2455" for this suite. 
• [SLOW TEST:10.895 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3276,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:23:22.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-d295bd1b-17b7-4d0c-a83f-04038b40fed1 in namespace container-probe-2415 Apr 23 00:23:26.400: INFO: Started pod test-webserver-d295bd1b-17b7-4d0c-a83f-04038b40fed1 in namespace container-probe-2415 STEP: checking the pod's current state and verifying that restartCount is present Apr 23 00:23:26.403: INFO: Initial restart count of pod test-webserver-d295bd1b-17b7-4d0c-a83f-04038b40fed1 is 0 STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:27:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2415" for this suite. • [SLOW TEST:245.066 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3279,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:27:27.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-c496dc45-c054-43cb-9107-ef125b52bdb8 in namespace container-probe-5787 Apr 23 00:27:31.465: INFO: Started pod 
liveness-c496dc45-c054-43cb-9107-ef125b52bdb8 in namespace container-probe-5787 STEP: checking the pod's current state and verifying that restartCount is present Apr 23 00:27:31.468: INFO: Initial restart count of pod liveness-c496dc45-c054-43cb-9107-ef125b52bdb8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:31:32.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5787" for this suite. • [SLOW TEST:244.707 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3284,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:31:32.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating 
RC which spawns configmap-volume pods Apr 23 00:31:33.028: INFO: Pod name wrapped-volume-race-4c6dfd6e-f071-4dba-b479-e8fe85d7ffc0: Found 0 pods out of 5 Apr 23 00:31:38.045: INFO: Pod name wrapped-volume-race-4c6dfd6e-f071-4dba-b479-e8fe85d7ffc0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4c6dfd6e-f071-4dba-b479-e8fe85d7ffc0 in namespace emptydir-wrapper-6333, will wait for the garbage collector to delete the pods Apr 23 00:31:50.150: INFO: Deleting ReplicationController wrapped-volume-race-4c6dfd6e-f071-4dba-b479-e8fe85d7ffc0 took: 11.90903ms Apr 23 00:31:50.450: INFO: Terminating ReplicationController wrapped-volume-race-4c6dfd6e-f071-4dba-b479-e8fe85d7ffc0 pods took: 300.264186ms STEP: Creating RC which spawns configmap-volume pods Apr 23 00:32:02.985: INFO: Pod name wrapped-volume-race-af315f68-9554-47de-8d0d-e6bb2b1c4728: Found 0 pods out of 5 Apr 23 00:32:07.994: INFO: Pod name wrapped-volume-race-af315f68-9554-47de-8d0d-e6bb2b1c4728: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-af315f68-9554-47de-8d0d-e6bb2b1c4728 in namespace emptydir-wrapper-6333, will wait for the garbage collector to delete the pods Apr 23 00:32:22.103: INFO: Deleting ReplicationController wrapped-volume-race-af315f68-9554-47de-8d0d-e6bb2b1c4728 took: 6.350023ms Apr 23 00:32:22.503: INFO: Terminating ReplicationController wrapped-volume-race-af315f68-9554-47de-8d0d-e6bb2b1c4728 pods took: 400.25477ms STEP: Creating RC which spawns configmap-volume pods Apr 23 00:32:33.050: INFO: Pod name wrapped-volume-race-8e2e5178-a9d4-4c77-bd9f-69f12297a446: Found 0 pods out of 5 Apr 23 00:32:38.059: INFO: Pod name wrapped-volume-race-8e2e5178-a9d4-4c77-bd9f-69f12297a446: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8e2e5178-a9d4-4c77-bd9f-69f12297a446 in namespace emptydir-wrapper-6333, will 
wait for the garbage collector to delete the pods Apr 23 00:32:52.195: INFO: Deleting ReplicationController wrapped-volume-race-8e2e5178-a9d4-4c77-bd9f-69f12297a446 took: 17.956713ms Apr 23 00:32:52.595: INFO: Terminating ReplicationController wrapped-volume-race-8e2e5178-a9d4-4c77-bd9f-69f12297a446 pods took: 400.306537ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:33:04.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6333" for this suite. • [SLOW TEST:92.612 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":193,"skipped":3286,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:33:04.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] 
from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 23 00:33:08.803: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:33:08.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3332" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:33:08.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
creating Agnhost RC Apr 23 00:33:09.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8096' Apr 23 00:33:12.350: INFO: stderr: "" Apr 23 00:33:12.350: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 23 00:33:13.354: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:13.354: INFO: Found 0 / 1 Apr 23 00:33:14.354: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:14.354: INFO: Found 0 / 1 Apr 23 00:33:15.355: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:15.355: INFO: Found 0 / 1 Apr 23 00:33:16.355: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:16.355: INFO: Found 1 / 1 Apr 23 00:33:16.355: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 23 00:33:16.359: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:16.359: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 23 00:33:16.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-6hjmp --namespace=kubectl-8096 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 23 00:33:16.486: INFO: stderr: "" Apr 23 00:33:16.486: INFO: stdout: "pod/agnhost-master-6hjmp patched\n" STEP: checking annotations Apr 23 00:33:16.495: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:33:16.495: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:33:16.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8096" for this suite. 
• [SLOW TEST:7.561 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":195,"skipped":3320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:33:16.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-8d67bae4-3af8-4d7f-9e8b-a013e84f627b STEP: Creating a pod to test consume secrets Apr 23 00:33:16.641: INFO: Waiting up to 5m0s for pod "pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71" in namespace "secrets-4094" to be "Succeeded or Failed" Apr 23 00:33:16.669: INFO: Pod "pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.792629ms Apr 23 00:33:18.708: INFO: Pod "pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066088661s Apr 23 00:33:20.711: INFO: Pod "pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069646032s STEP: Saw pod success Apr 23 00:33:20.711: INFO: Pod "pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71" satisfied condition "Succeeded or Failed" Apr 23 00:33:20.713: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71 container secret-volume-test: STEP: delete the pod Apr 23 00:33:20.766: INFO: Waiting for pod pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71 to disappear Apr 23 00:33:20.777: INFO: Pod pod-secrets-ad43ebd7-a485-4b96-8a90-329e50efcd71 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:33:20.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4094" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3388,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:33:20.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 23 00:33:20.857: INFO: Waiting up to 5m0s for pod "pod-0038b9b7-7345-40db-961f-8eb26711301a" in namespace "emptydir-6626" to be "Succeeded or Failed" Apr 23 00:33:20.861: INFO: Pod "pod-0038b9b7-7345-40db-961f-8eb26711301a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.878007ms Apr 23 00:33:22.866: INFO: Pod "pod-0038b9b7-7345-40db-961f-8eb26711301a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008343396s Apr 23 00:33:24.870: INFO: Pod "pod-0038b9b7-7345-40db-961f-8eb26711301a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012333847s STEP: Saw pod success Apr 23 00:33:24.870: INFO: Pod "pod-0038b9b7-7345-40db-961f-8eb26711301a" satisfied condition "Succeeded or Failed" Apr 23 00:33:24.872: INFO: Trying to get logs from node latest-worker2 pod pod-0038b9b7-7345-40db-961f-8eb26711301a container test-container: STEP: delete the pod Apr 23 00:33:24.900: INFO: Waiting for pod pod-0038b9b7-7345-40db-961f-8eb26711301a to disappear Apr 23 00:33:24.929: INFO: Pod pod-0038b9b7-7345-40db-961f-8eb26711301a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:33:24.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6626" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3403,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:33:24.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-3362 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-3362
STEP: Deleting pre-stop pod
Apr 23 00:33:38.098: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:33:38.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3362" for this suite.

• [SLOW TEST:13.175 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":198,"skipped":3405,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:33:38.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 23 00:33:44.895: INFO: 1 pods remaining
Apr 23 00:33:44.895: INFO: 0 pods has nil DeletionTimestamp
Apr 23 00:33:44.895: INFO:
Apr 23 00:33:46.325: INFO: 0 pods remaining
Apr 23 00:33:46.325: INFO: 0 pods has nil DeletionTimestamp
Apr 23 00:33:46.325: INFO:
STEP: Gathering metrics
W0423 00:33:47.374864 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 23 00:33:47.374: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:33:47.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6699" for this suite.
• [SLOW TEST:9.353 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":199,"skipped":3407,"failed":0}
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:33:47.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 23 00:33:48.398: INFO: Waiting up to 5m0s for pod "downward-api-356afc9b-a73d-434e-9092-368ae18bacbc" in namespace "downward-api-157" to be "Succeeded or Failed"
Apr 23 00:33:48.631: INFO: Pod "downward-api-356afc9b-a73d-434e-9092-368ae18bacbc": Phase="Pending", Reason="", readiness=false. Elapsed: 232.712552ms
Apr 23 00:33:50.635: INFO: Pod "downward-api-356afc9b-a73d-434e-9092-368ae18bacbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23691746s
Apr 23 00:33:52.640: INFO: Pod "downward-api-356afc9b-a73d-434e-9092-368ae18bacbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241861556s
STEP: Saw pod success
Apr 23 00:33:52.640: INFO: Pod "downward-api-356afc9b-a73d-434e-9092-368ae18bacbc" satisfied condition "Succeeded or Failed"
Apr 23 00:33:52.672: INFO: Trying to get logs from node latest-worker pod downward-api-356afc9b-a73d-434e-9092-368ae18bacbc container dapi-container:
STEP: delete the pod
Apr 23 00:33:52.748: INFO: Waiting for pod downward-api-356afc9b-a73d-434e-9092-368ae18bacbc to disappear
Apr 23 00:33:52.754: INFO: Pod downward-api-356afc9b-a73d-434e-9092-368ae18bacbc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:33:52.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-157" for this suite.

• [SLOW TEST:5.364 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3407,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:33:52.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:33:57.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4612" for this suite.

• [SLOW TEST:5.155 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":201,"skipped":3419,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:33:57.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 23 00:33:58.051: INFO: Waiting up to 5m0s for pod "downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e" in namespace "downward-api-4588" to be "Succeeded or Failed"
Apr 23 00:33:58.054: INFO: Pod "downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.157687ms
Apr 23 00:34:00.057: INFO: Pod "downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006292341s
Apr 23 00:34:02.061: INFO: Pod "downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010348343s
STEP: Saw pod success
Apr 23 00:34:02.061: INFO: Pod "downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e" satisfied condition "Succeeded or Failed"
Apr 23 00:34:02.064: INFO: Trying to get logs from node latest-worker pod downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e container dapi-container:
STEP: delete the pod
Apr 23 00:34:02.148: INFO: Waiting for pod downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e to disappear
Apr 23 00:34:02.152: INFO: Pod downward-api-560f33cb-f1eb-4bdc-b9a6-89d0d8685c7e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:02.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4588" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:02.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:19.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1394" for this suite.

• [SLOW TEST:17.164 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":203,"skipped":3492,"failed":0}
SSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:19.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-8913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8913 to expose endpoints map[]
Apr 23 00:34:19.503: INFO: Get endpoints failed (23.415825ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 23 00:34:20.509: INFO: successfully validated that service multi-endpoint-test in namespace services-8913 exposes endpoints map[] (1.029686096s elapsed)
STEP: Creating pod pod1 in namespace services-8913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8913 to expose endpoints map[pod1:[100]]
Apr 23 00:34:23.579: INFO: successfully validated that service multi-endpoint-test in namespace services-8913 exposes endpoints map[pod1:[100]] (3.062622047s elapsed)
STEP: Creating pod pod2 in namespace services-8913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8913 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 23 00:34:26.682: INFO: successfully validated that service multi-endpoint-test in namespace services-8913 exposes endpoints map[pod1:[100] pod2:[101]] (3.099294026s elapsed)
STEP: Deleting pod pod1 in namespace services-8913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8913 to expose endpoints map[pod2:[101]]
Apr 23 00:34:27.386: INFO: successfully validated that service multi-endpoint-test in namespace services-8913 exposes endpoints map[pod2:[101]] (688.474375ms elapsed)
STEP: Deleting pod pod2 in namespace services-8913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8913 to expose endpoints map[]
Apr 23 00:34:28.408: INFO: successfully validated that service multi-endpoint-test in namespace services-8913 exposes endpoints map[] (1.017075664s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:28.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8913" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.126 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":204,"skipped":3498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:28.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b3320bfa-c518-4480-9dac-db7ddc06c8f4
STEP: Creating a pod to test consume configMaps
Apr 23 00:34:28.739: INFO: Waiting up to 5m0s for pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56" in namespace "configmap-7464" to be "Succeeded or Failed"
Apr 23 00:34:28.746: INFO: Pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928079ms
Apr 23 00:34:30.749: INFO: Pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010719374s
Apr 23 00:34:32.780: INFO: Pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041848924s
Apr 23 00:34:34.786: INFO: Pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047727429s
STEP: Saw pod success
Apr 23 00:34:34.786: INFO: Pod "pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56" satisfied condition "Succeeded or Failed"
Apr 23 00:34:34.789: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56 container configmap-volume-test:
STEP: delete the pod
Apr 23 00:34:34.830: INFO: Waiting for pod pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56 to disappear
Apr 23 00:34:34.841: INFO: Pod pod-configmaps-50d58be4-2957-4047-badb-5f163dbefb56 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:34.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7464" for this suite.
• [SLOW TEST:6.368 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3539,"failed":0}
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:34.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 23 00:34:34.940: INFO: Waiting up to 5m0s for pod "pod-2a642c4f-8749-44e6-8ba9-9927e274353d" in namespace "emptydir-5286" to be "Succeeded or Failed"
Apr 23 00:34:34.943: INFO: Pod "pod-2a642c4f-8749-44e6-8ba9-9927e274353d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232628ms
Apr 23 00:34:36.947: INFO: Pod "pod-2a642c4f-8749-44e6-8ba9-9927e274353d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006874006s
Apr 23 00:34:38.950: INFO: Pod "pod-2a642c4f-8749-44e6-8ba9-9927e274353d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010614397s
STEP: Saw pod success
Apr 23 00:34:38.950: INFO: Pod "pod-2a642c4f-8749-44e6-8ba9-9927e274353d" satisfied condition "Succeeded or Failed"
Apr 23 00:34:38.953: INFO: Trying to get logs from node latest-worker2 pod pod-2a642c4f-8749-44e6-8ba9-9927e274353d container test-container:
STEP: delete the pod
Apr 23 00:34:38.987: INFO: Waiting for pod pod-2a642c4f-8749-44e6-8ba9-9927e274353d to disappear
Apr 23 00:34:38.991: INFO: Pod pod-2a642c4f-8749-44e6-8ba9-9927e274353d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:38.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5286" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3539,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:38.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 23 00:34:39.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 23 00:34:41.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198879, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198879, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198879, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198879, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 23 00:34:44.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:34:54.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3208" for this suite.
STEP: Destroying namespace "webhook-3208-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.906 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":207,"skipped":3539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:34:54.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 23 00:34:55.651: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 23 00:34:57.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198895, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198895, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198895, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198895, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 23 00:35:00.689: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 23 00:35:00.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:35:01.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5815" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.065 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":208,"skipped":3573,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:35:01.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8238/configmap-test-c5a98e5f-bdaf-46df-bc0b-c0d758366ee7
STEP: Creating a pod to test consume configMaps
Apr 23 00:35:02.047: INFO: Waiting up to 5m0s for pod "pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58" in namespace "configmap-8238" to be "Succeeded or Failed"
Apr 23 00:35:02.052: INFO: Pod "pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131347ms
Apr 23 00:35:04.058: INFO: Pod "pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010352925s
Apr 23 00:35:06.062: INFO: Pod "pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014630721s
STEP: Saw pod success
Apr 23 00:35:06.062: INFO: Pod "pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58" satisfied condition "Succeeded or Failed"
Apr 23 00:35:06.065: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58 container env-test:
STEP: delete the pod
Apr 23 00:35:06.097: INFO: Waiting for pod pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58 to disappear
Apr 23 00:35:06.112: INFO: Pod pod-configmaps-c22ce8b5-e487-414a-ad2e-fbcc1222ea58 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:35:06.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8238" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3580,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:06.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 23 00:35:06.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-8631 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 23 00:35:06.336: INFO: stderr: "" Apr 23 00:35:06.336: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 23 00:35:06.336: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 23 00:35:06.336: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8631" to be "running and ready, or succeeded" Apr 23 00:35:06.357: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.594373ms Apr 23 00:35:08.361: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024622841s Apr 23 00:35:10.365: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.028754793s Apr 23 00:35:10.365: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 23 00:35:10.365: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Apr 23 00:35:10.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8631' Apr 23 00:35:10.474: INFO: stderr: "" Apr 23 00:35:10.474: INFO: stdout: "I0423 00:35:08.433440 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/vq2z 469\nI0423 00:35:08.633584 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/2xhd 476\nI0423 00:35:08.833705 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/mgp 250\nI0423 00:35:09.033607 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/7qp 361\nI0423 00:35:09.233683 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/m7x 383\nI0423 00:35:09.433681 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/85z 598\nI0423 00:35:09.633640 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/sqrv 286\nI0423 00:35:09.833603 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/lhw 359\nI0423 00:35:10.033675 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j6cp 317\nI0423 00:35:10.233630 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/97sk 342\nI0423 00:35:10.433610 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tbm 478\n" STEP: limiting log lines Apr 23 00:35:10.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-8631 --tail=1' Apr 23 00:35:10.583: INFO: stderr: "" Apr 23 00:35:10.583: INFO: stdout: "I0423 00:35:10.433610 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tbm 478\n" Apr 23 00:35:10.583: INFO: got output "I0423 00:35:10.433610 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tbm 478\n" STEP: limiting log bytes Apr 23 00:35:10.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8631 --limit-bytes=1' Apr 23 00:35:10.689: INFO: stderr: "" Apr 23 00:35:10.690: INFO: stdout: "I" Apr 23 00:35:10.690: INFO: got output "I" STEP: exposing timestamps Apr 23 00:35:10.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8631 --tail=1 --timestamps' Apr 23 00:35:10.818: INFO: stderr: "" Apr 23 00:35:10.818: INFO: stdout: "2020-04-23T00:35:10.633741665Z I0423 00:35:10.633595 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/46l 249\n" Apr 23 00:35:10.818: INFO: got output "2020-04-23T00:35:10.633741665Z I0423 00:35:10.633595 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/46l 249\n" STEP: restricting to a time range Apr 23 00:35:13.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8631 --since=1s' Apr 23 00:35:13.425: INFO: stderr: "" Apr 23 00:35:13.425: INFO: stdout: "I0423 00:35:12.433644 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/7vk5 402\nI0423 00:35:12.633658 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/8x48 344\nI0423 00:35:12.833579 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/p2x 402\nI0423 00:35:13.033625 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/mxt 335\nI0423 
00:35:13.233616 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/ccf 332\n" Apr 23 00:35:13.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8631 --since=24h' Apr 23 00:35:13.545: INFO: stderr: "" Apr 23 00:35:13.545: INFO: stdout: "I0423 00:35:08.433440 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/vq2z 469\nI0423 00:35:08.633584 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/2xhd 476\nI0423 00:35:08.833705 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/mgp 250\nI0423 00:35:09.033607 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/7qp 361\nI0423 00:35:09.233683 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/m7x 383\nI0423 00:35:09.433681 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/85z 598\nI0423 00:35:09.633640 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/sqrv 286\nI0423 00:35:09.833603 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/lhw 359\nI0423 00:35:10.033675 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j6cp 317\nI0423 00:35:10.233630 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/97sk 342\nI0423 00:35:10.433610 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tbm 478\nI0423 00:35:10.633595 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/46l 249\nI0423 00:35:10.833617 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/d8d2 235\nI0423 00:35:11.033637 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/7kst 263\nI0423 00:35:11.233566 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/9ct 275\nI0423 00:35:11.433630 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/bqx 436\nI0423 00:35:11.633618 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/mmv 365\nI0423 00:35:11.833630 1 
logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tcz 403\nI0423 00:35:12.033624 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/rhnq 438\nI0423 00:35:12.233593 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/5dz 466\nI0423 00:35:12.433644 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/7vk5 402\nI0423 00:35:12.633658 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/8x48 344\nI0423 00:35:12.833579 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/p2x 402\nI0423 00:35:13.033625 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/mxt 335\nI0423 00:35:13.233616 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/ccf 332\nI0423 00:35:13.433607 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/gbp 225\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 23 00:35:13.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8631' Apr 23 00:35:22.769: INFO: stderr: "" Apr 23 00:35:22.769: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:22.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8631" for this suite. 
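The `kubectl logs` filtering above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) operates on lines in the fixed format the `logs-generator` image emits. A small sketch of parsing one of those lines into its fields, using a regex derived from the format visible in the stdout above:

```python
import re

# Each generator line looks like:
#   I0423 00:35:10.433610 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tbm 478
# i.e. glog prefix, sequence number, HTTP verb, a pod URL, and a byte count.
LINE_RE = re.compile(
    r"logs_generator\.go:\d+\]\s+(\d+)\s+(GET|PUT|POST)\s+"
    r"/api/v1/namespaces/([^/]+)/pods/(\S+)\s+(\d+)"
)

def parse_line(line):
    """Extract the fields of one logs-generator line, or None if it doesn't match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    seq, verb, ns, pod, size = m.groups()
    return {"seq": int(seq), "verb": verb, "namespace": ns,
            "pod": pod, "size": int(size)}

sample = ("I0423 00:35:10.433610 1 logs_generator.go:76] "
          "10 PUT /api/v1/namespaces/default/pods/tbm 478")
print(parse_line(sample))
# {'seq': 10, 'verb': 'PUT', 'namespace': 'default', 'pod': 'tbm', 'size': 478}
```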
• [SLOW TEST:16.596 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":210,"skipped":3584,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:22.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:29.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8761" for this suite. STEP: Destroying namespace "nsdeletetest-99" for this suite. Apr 23 00:35:29.103: INFO: Namespace nsdeletetest-99 was already deleted STEP: Destroying namespace "nsdeletetest-4298" for this suite. • [SLOW TEST:6.338 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":211,"skipped":3604,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:29.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: 
Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:35:29.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:35:31.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198929, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198929, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198929, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723198929, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:35:34.608: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:35.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5489" for this suite. STEP: Destroying namespace "webhook-5489-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.054 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":212,"skipped":3622,"failed":0} [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:35.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 23 00:35:39.829: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8258 pod-service-account-6e4a2f48-c028-4bc7-acc4-da205a711b65 -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 23 00:35:40.053: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8258 pod-service-account-6e4a2f48-c028-4bc7-acc4-da205a711b65 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 23 00:35:40.230: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8258 pod-service-account-6e4a2f48-c028-4bc7-acc4-da205a711b65 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:40.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8258" for this suite. • [SLOW TEST:5.277 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":213,"skipped":3622,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:40.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's 
limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 23 00:35:40.501: INFO: Waiting up to 5m0s for pod "downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a" in namespace "downward-api-2694" to be "Succeeded or Failed" Apr 23 00:35:40.520: INFO: Pod "downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.202174ms Apr 23 00:35:42.525: INFO: Pod "downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023938198s Apr 23 00:35:44.529: INFO: Pod "downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028149146s STEP: Saw pod success Apr 23 00:35:44.529: INFO: Pod "downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a" satisfied condition "Succeeded or Failed" Apr 23 00:35:44.532: INFO: Trying to get logs from node latest-worker pod downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a container dapi-container: STEP: delete the pod Apr 23 00:35:44.582: INFO: Waiting for pod downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a to disappear Apr 23 00:35:44.592: INFO: Pod downward-api-011d1b22-86f4-4949-b7f9-fb2fef2fe51a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:44.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2694" for this suite. 
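The downward API test above checks that `limits.cpu/memory` and `requests.cpu/memory` arrive in the container as env vars; each `resourceFieldRef` value is scaled by a divisor before injection. A rough sketch of that conversion for CPU quantities (this approximates, and does not reimplement, Kubernetes' `resource.Quantity` arithmetic):

```python
import math

def to_millicores(q):
    """Parse a CPU quantity like '500m' or '2' into millicores (illustrative parser)."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def downward_api_value(quantity, divisor):
    # The downward API divides the quantity by the divisor and rounds up
    # to a whole number before exposing it as an env var.
    return math.ceil(to_millicores(quantity) / to_millicores(divisor))

print(downward_api_value("500m", "1m"))  # 500
print(downward_api_value("500m", "1"))   # 1  (rounded up to whole cores)
```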
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3630,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:44.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:35:44.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775" in namespace "downward-api-1981" to be "Succeeded or Failed" Apr 23 00:35:44.694: INFO: Pod "downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519399ms Apr 23 00:35:46.715: INFO: Pod "downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024058726s Apr 23 00:35:48.720: INFO: Pod "downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028722566s STEP: Saw pod success Apr 23 00:35:48.720: INFO: Pod "downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775" satisfied condition "Succeeded or Failed" Apr 23 00:35:48.723: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775 container client-container: STEP: delete the pod Apr 23 00:35:48.743: INFO: Waiting for pod downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775 to disappear Apr 23 00:35:48.748: INFO: Pod downwardapi-volume-457689ba-783f-4c08-8ec2-4effee9d9775 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:35:48.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1981" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3638,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:35:48.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:35:48.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR 
Apr 23 00:35:49.385: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:49Z generation:1 name:name1 resourceVersion:10265978 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b552ce60-5e98-4804-8307-3f2beb0d19d3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 23 00:35:59.391: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:59Z generation:1 name:name2 resourceVersion:10266025 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ab80fcfe-f13d-439b-853f-c13ef0f3558b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 23 00:36:09.397: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:49Z generation:2 name:name1 resourceVersion:10266055 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b552ce60-5e98-4804-8307-3f2beb0d19d3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 23 00:36:19.403: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:59Z generation:2 name:name2 resourceVersion:10266085 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ab80fcfe-f13d-439b-853f-c13ef0f3558b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 23 00:36:29.419: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:49Z generation:2 name:name1 resourceVersion:10266115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b552ce60-5e98-4804-8307-3f2beb0d19d3] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 23 00:36:39.426: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-23T00:35:59Z generation:2 name:name2 resourceVersion:10266145 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ab80fcfe-f13d-439b-853f-c13ef0f3558b] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:36:49.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5443" for this suite. • [SLOW TEST:61.189 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":216,"skipped":3640,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: 
Creating a kubernetes client Apr 23 00:36:49.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9827 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 23 00:36:50.036: INFO: Found 0 stateful pods, waiting for 3 Apr 23 00:37:00.041: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:37:00.041: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:37:00.041: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:37:00.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9827 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:37:00.324: INFO: stderr: "I0423 00:37:00.198238 2568 log.go:172] (0xc000a3d550) (0xc000aa66e0) Create stream\nI0423 00:37:00.198296 2568 log.go:172] (0xc000a3d550) (0xc000aa66e0) Stream added, broadcasting: 1\nI0423 00:37:00.203967 2568 log.go:172] (0xc000a3d550) Reply frame received for 1\nI0423 00:37:00.204007 2568 log.go:172] (0xc000a3d550) (0xc000681680) Create stream\nI0423 00:37:00.204016 2568 log.go:172] (0xc000a3d550) (0xc000681680) Stream added, broadcasting: 3\nI0423 00:37:00.205332 2568 log.go:172] (0xc000a3d550) Reply frame 
received for 3\nI0423 00:37:00.205367 2568 log.go:172] (0xc000a3d550) (0xc0004f8aa0) Create stream\nI0423 00:37:00.205379 2568 log.go:172] (0xc000a3d550) (0xc0004f8aa0) Stream added, broadcasting: 5\nI0423 00:37:00.206478 2568 log.go:172] (0xc000a3d550) Reply frame received for 5\nI0423 00:37:00.293644 2568 log.go:172] (0xc000a3d550) Data frame received for 5\nI0423 00:37:00.293689 2568 log.go:172] (0xc0004f8aa0) (5) Data frame handling\nI0423 00:37:00.293732 2568 log.go:172] (0xc0004f8aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:37:00.317497 2568 log.go:172] (0xc000a3d550) Data frame received for 3\nI0423 00:37:00.317526 2568 log.go:172] (0xc000681680) (3) Data frame handling\nI0423 00:37:00.317544 2568 log.go:172] (0xc000681680) (3) Data frame sent\nI0423 00:37:00.317682 2568 log.go:172] (0xc000a3d550) Data frame received for 3\nI0423 00:37:00.317745 2568 log.go:172] (0xc000681680) (3) Data frame handling\nI0423 00:37:00.317764 2568 log.go:172] (0xc000a3d550) Data frame received for 5\nI0423 00:37:00.317769 2568 log.go:172] (0xc0004f8aa0) (5) Data frame handling\nI0423 00:37:00.319585 2568 log.go:172] (0xc000a3d550) Data frame received for 1\nI0423 00:37:00.319612 2568 log.go:172] (0xc000aa66e0) (1) Data frame handling\nI0423 00:37:00.319630 2568 log.go:172] (0xc000aa66e0) (1) Data frame sent\nI0423 00:37:00.319641 2568 log.go:172] (0xc000a3d550) (0xc000aa66e0) Stream removed, broadcasting: 1\nI0423 00:37:00.319651 2568 log.go:172] (0xc000a3d550) Go away received\nI0423 00:37:00.319987 2568 log.go:172] (0xc000a3d550) (0xc000aa66e0) Stream removed, broadcasting: 1\nI0423 00:37:00.320009 2568 log.go:172] (0xc000a3d550) (0xc000681680) Stream removed, broadcasting: 3\nI0423 00:37:00.320027 2568 log.go:172] (0xc000a3d550) (0xc0004f8aa0) Stream removed, broadcasting: 5\n" Apr 23 00:37:00.325: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:37:00.325: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 23 00:37:10.357: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 23 00:37:20.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9827 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:37:20.597: INFO: stderr: "I0423 00:37:20.509810 2590 log.go:172] (0xc000b34000) (0xc0007df2c0) Create stream\nI0423 00:37:20.509879 2590 log.go:172] (0xc000b34000) (0xc0007df2c0) Stream added, broadcasting: 1\nI0423 00:37:20.512948 2590 log.go:172] (0xc000b34000) Reply frame received for 1\nI0423 00:37:20.512991 2590 log.go:172] (0xc000b34000) (0xc0007df4a0) Create stream\nI0423 00:37:20.513005 2590 log.go:172] (0xc000b34000) (0xc0007df4a0) Stream added, broadcasting: 3\nI0423 00:37:20.514155 2590 log.go:172] (0xc000b34000) Reply frame received for 3\nI0423 00:37:20.514211 2590 log.go:172] (0xc000b34000) (0xc00093a000) Create stream\nI0423 00:37:20.514229 2590 log.go:172] (0xc000b34000) (0xc00093a000) Stream added, broadcasting: 5\nI0423 00:37:20.515247 2590 log.go:172] (0xc000b34000) Reply frame received for 5\nI0423 00:37:20.590634 2590 log.go:172] (0xc000b34000) Data frame received for 5\nI0423 00:37:20.590682 2590 log.go:172] (0xc00093a000) (5) Data frame handling\nI0423 00:37:20.590699 2590 log.go:172] (0xc00093a000) (5) Data frame sent\nI0423 00:37:20.590711 2590 log.go:172] (0xc000b34000) Data frame received for 5\nI0423 00:37:20.590720 2590 log.go:172] (0xc00093a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:37:20.590750 2590 log.go:172] (0xc000b34000) Data frame received for 3\nI0423 
00:37:20.590778 2590 log.go:172] (0xc0007df4a0) (3) Data frame handling\nI0423 00:37:20.590792 2590 log.go:172] (0xc0007df4a0) (3) Data frame sent\nI0423 00:37:20.590803 2590 log.go:172] (0xc000b34000) Data frame received for 3\nI0423 00:37:20.590809 2590 log.go:172] (0xc0007df4a0) (3) Data frame handling\nI0423 00:37:20.592228 2590 log.go:172] (0xc000b34000) Data frame received for 1\nI0423 00:37:20.592250 2590 log.go:172] (0xc0007df2c0) (1) Data frame handling\nI0423 00:37:20.592266 2590 log.go:172] (0xc0007df2c0) (1) Data frame sent\nI0423 00:37:20.592286 2590 log.go:172] (0xc000b34000) (0xc0007df2c0) Stream removed, broadcasting: 1\nI0423 00:37:20.592311 2590 log.go:172] (0xc000b34000) Go away received\nI0423 00:37:20.592747 2590 log.go:172] (0xc000b34000) (0xc0007df2c0) Stream removed, broadcasting: 1\nI0423 00:37:20.592768 2590 log.go:172] (0xc000b34000) (0xc0007df4a0) Stream removed, broadcasting: 3\nI0423 00:37:20.592778 2590 log.go:172] (0xc000b34000) (0xc00093a000) Stream removed, broadcasting: 5\n" Apr 23 00:37:20.597: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:37:20.597: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 23 00:37:40.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9827 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:37:40.888: INFO: stderr: "I0423 00:37:40.749353 2612 log.go:172] (0xc00003a420) (0xc00083b220) Create stream\nI0423 00:37:40.749412 2612 log.go:172] (0xc00003a420) (0xc00083b220) Stream added, broadcasting: 1\nI0423 00:37:40.751155 2612 log.go:172] (0xc00003a420) Reply frame received for 1\nI0423 00:37:40.751233 2612 log.go:172] (0xc00003a420) (0xc00092e000) Create stream\nI0423 00:37:40.751272 2612 log.go:172] 
(0xc00003a420) (0xc00092e000) Stream added, broadcasting: 3\nI0423 00:37:40.752279 2612 log.go:172] (0xc00003a420) Reply frame received for 3\nI0423 00:37:40.752320 2612 log.go:172] (0xc00003a420) (0xc000a4a000) Create stream\nI0423 00:37:40.752347 2612 log.go:172] (0xc00003a420) (0xc000a4a000) Stream added, broadcasting: 5\nI0423 00:37:40.753252 2612 log.go:172] (0xc00003a420) Reply frame received for 5\nI0423 00:37:40.843661 2612 log.go:172] (0xc00003a420) Data frame received for 5\nI0423 00:37:40.843689 2612 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0423 00:37:40.843702 2612 log.go:172] (0xc000a4a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:37:40.879797 2612 log.go:172] (0xc00003a420) Data frame received for 3\nI0423 00:37:40.879848 2612 log.go:172] (0xc00092e000) (3) Data frame handling\nI0423 00:37:40.879864 2612 log.go:172] (0xc00092e000) (3) Data frame sent\nI0423 00:37:40.879915 2612 log.go:172] (0xc00003a420) Data frame received for 5\nI0423 00:37:40.879928 2612 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0423 00:37:40.880390 2612 log.go:172] (0xc00003a420) Data frame received for 3\nI0423 00:37:40.880407 2612 log.go:172] (0xc00092e000) (3) Data frame handling\nI0423 00:37:40.882782 2612 log.go:172] (0xc00003a420) Data frame received for 1\nI0423 00:37:40.882846 2612 log.go:172] (0xc00083b220) (1) Data frame handling\nI0423 00:37:40.882872 2612 log.go:172] (0xc00083b220) (1) Data frame sent\nI0423 00:37:40.882933 2612 log.go:172] (0xc00003a420) (0xc00083b220) Stream removed, broadcasting: 1\nI0423 00:37:40.882952 2612 log.go:172] (0xc00003a420) Go away received\nI0423 00:37:40.883362 2612 log.go:172] (0xc00003a420) (0xc00083b220) Stream removed, broadcasting: 1\nI0423 00:37:40.883386 2612 log.go:172] (0xc00003a420) (0xc00092e000) Stream removed, broadcasting: 3\nI0423 00:37:40.883400 2612 log.go:172] (0xc00003a420) (0xc000a4a000) Stream removed, broadcasting: 5\n" Apr 23 00:37:40.889: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:37:40.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:37:50.920: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 23 00:38:00.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9827 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:38:01.174: INFO: stderr: "I0423 00:38:01.080704 2633 log.go:172] (0xc000b1e000) (0xc0006f57c0) Create stream\nI0423 00:38:01.080758 2633 log.go:172] (0xc000b1e000) (0xc0006f57c0) Stream added, broadcasting: 1\nI0423 00:38:01.083348 2633 log.go:172] (0xc000b1e000) Reply frame received for 1\nI0423 00:38:01.083404 2633 log.go:172] (0xc000b1e000) (0xc0004febe0) Create stream\nI0423 00:38:01.083436 2633 log.go:172] (0xc000b1e000) (0xc0004febe0) Stream added, broadcasting: 3\nI0423 00:38:01.084448 2633 log.go:172] (0xc000b1e000) Reply frame received for 3\nI0423 00:38:01.084481 2633 log.go:172] (0xc000b1e000) (0xc0006f5860) Create stream\nI0423 00:38:01.084497 2633 log.go:172] (0xc000b1e000) (0xc0006f5860) Stream added, broadcasting: 5\nI0423 00:38:01.085730 2633 log.go:172] (0xc000b1e000) Reply frame received for 5\nI0423 00:38:01.165963 2633 log.go:172] (0xc000b1e000) Data frame received for 5\nI0423 00:38:01.165993 2633 log.go:172] (0xc0006f5860) (5) Data frame handling\nI0423 00:38:01.166007 2633 log.go:172] (0xc0006f5860) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:38:01.166040 2633 log.go:172] (0xc000b1e000) Data frame received for 3\nI0423 00:38:01.166075 2633 log.go:172] (0xc0004febe0) (3) Data frame handling\nI0423 00:38:01.166109 2633 log.go:172] (0xc000b1e000) Data frame received for 5\nI0423 00:38:01.166142 2633 log.go:172] (0xc0004febe0) 
(3) Data frame sent\nI0423 00:38:01.166175 2633 log.go:172] (0xc000b1e000) Data frame received for 3\nI0423 00:38:01.166198 2633 log.go:172] (0xc0004febe0) (3) Data frame handling\nI0423 00:38:01.166369 2633 log.go:172] (0xc0006f5860) (5) Data frame handling\nI0423 00:38:01.168014 2633 log.go:172] (0xc000b1e000) Data frame received for 1\nI0423 00:38:01.168047 2633 log.go:172] (0xc0006f57c0) (1) Data frame handling\nI0423 00:38:01.168067 2633 log.go:172] (0xc0006f57c0) (1) Data frame sent\nI0423 00:38:01.168087 2633 log.go:172] (0xc000b1e000) (0xc0006f57c0) Stream removed, broadcasting: 1\nI0423 00:38:01.168110 2633 log.go:172] (0xc000b1e000) Go away received\nI0423 00:38:01.168576 2633 log.go:172] (0xc000b1e000) (0xc0006f57c0) Stream removed, broadcasting: 1\nI0423 00:38:01.168600 2633 log.go:172] (0xc000b1e000) (0xc0004febe0) Stream removed, broadcasting: 3\nI0423 00:38:01.168611 2633 log.go:172] (0xc000b1e000) (0xc0006f5860) Stream removed, broadcasting: 5\n" Apr 23 00:38:01.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:38:01.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:38:11.195: INFO: Waiting for StatefulSet statefulset-9827/ss2 to complete update Apr 23 00:38:11.195: INFO: Waiting for Pod statefulset-9827/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 23 00:38:11.195: INFO: Waiting for Pod statefulset-9827/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 23 00:38:11.195: INFO: Waiting for Pod statefulset-9827/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 23 00:38:21.211: INFO: Waiting for StatefulSet statefulset-9827/ss2 to complete update Apr 23 00:38:21.211: INFO: Waiting for Pod statefulset-9827/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 23 00:38:31.204: INFO: Waiting for StatefulSet 
statefulset-9827/ss2 to complete update Apr 23 00:38:31.204: INFO: Waiting for Pod statefulset-9827/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 23 00:38:41.204: INFO: Deleting all statefulset in ns statefulset-9827 Apr 23 00:38:41.207: INFO: Scaling statefulset ss2 to 0 Apr 23 00:39:21.225: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:39:21.229: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:39:21.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9827" for this suite. • [SLOW TEST:151.303 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":217,"skipped":3643,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:39:21.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-8d3e66f5-5736-42d3-a198-5edc6fc1a521 STEP: Creating a pod to test consume secrets Apr 23 00:39:21.394: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe" in namespace "projected-5683" to be "Succeeded or Failed" Apr 23 00:39:21.407: INFO: Pod "pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.445019ms Apr 23 00:39:23.419: INFO: Pod "pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025285048s Apr 23 00:39:25.423: INFO: Pod "pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029379103s STEP: Saw pod success Apr 23 00:39:25.423: INFO: Pod "pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe" satisfied condition "Succeeded or Failed" Apr 23 00:39:25.426: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe container projected-secret-volume-test: STEP: delete the pod Apr 23 00:39:25.468: INFO: Waiting for pod pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe to disappear Apr 23 00:39:25.473: INFO: Pod pod-projected-secrets-3844ae24-182b-42a1-9fc3-67d59db8f2fe no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:39:25.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5683" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:39:25.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:39:25.535: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:39:29.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9144" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3680,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:39:29.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2000 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 23 00:39:29.839: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 23 00:39:29.931: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:39:31.940: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:39:33.935: INFO: The status of Pod 
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:39:35.935: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:39:37.936: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:39:39.936: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:39:41.935: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:39:43.936: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 23 00:39:45.935: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 23 00:39:45.942: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 23 00:39:47.946: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 23 00:39:49.946: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 23 00:39:53.968: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.6:8080/dial?request=hostname&protocol=udp&host=10.244.2.5&port=8081&tries=1'] Namespace:pod-network-test-2000 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:39:53.968: INFO: >>> kubeConfig: /root/.kube/config I0423 00:39:54.006159 7 log.go:172] (0xc00277a840) (0xc00118e820) Create stream I0423 00:39:54.006191 7 log.go:172] (0xc00277a840) (0xc00118e820) Stream added, broadcasting: 1 I0423 00:39:54.007755 7 log.go:172] (0xc00277a840) Reply frame received for 1 I0423 00:39:54.007783 7 log.go:172] (0xc00277a840) (0xc002e20460) Create stream I0423 00:39:54.007793 7 log.go:172] (0xc00277a840) (0xc002e20460) Stream added, broadcasting: 3 I0423 00:39:54.008631 7 log.go:172] (0xc00277a840) Reply frame received for 3 I0423 00:39:54.008669 7 log.go:172] (0xc00277a840) (0xc002e205a0) Create stream I0423 00:39:54.008685 7 log.go:172] (0xc00277a840) (0xc002e205a0) Stream added, broadcasting: 5 I0423 00:39:54.009770 7 log.go:172] (0xc00277a840) Reply frame received 
for 5 I0423 00:39:54.118542 7 log.go:172] (0xc00277a840) Data frame received for 3 I0423 00:39:54.118590 7 log.go:172] (0xc002e20460) (3) Data frame handling I0423 00:39:54.118625 7 log.go:172] (0xc002e20460) (3) Data frame sent I0423 00:39:54.119449 7 log.go:172] (0xc00277a840) Data frame received for 3 I0423 00:39:54.119484 7 log.go:172] (0xc002e20460) (3) Data frame handling I0423 00:39:54.119516 7 log.go:172] (0xc00277a840) Data frame received for 5 I0423 00:39:54.119548 7 log.go:172] (0xc002e205a0) (5) Data frame handling I0423 00:39:54.123953 7 log.go:172] (0xc00277a840) Data frame received for 1 I0423 00:39:54.123987 7 log.go:172] (0xc00118e820) (1) Data frame handling I0423 00:39:54.124007 7 log.go:172] (0xc00118e820) (1) Data frame sent I0423 00:39:54.124042 7 log.go:172] (0xc00277a840) (0xc00118e820) Stream removed, broadcasting: 1 I0423 00:39:54.124079 7 log.go:172] (0xc00277a840) Go away received I0423 00:39:54.124188 7 log.go:172] (0xc00277a840) (0xc00118e820) Stream removed, broadcasting: 1 I0423 00:39:54.124217 7 log.go:172] (0xc00277a840) (0xc002e20460) Stream removed, broadcasting: 3 I0423 00:39:54.124234 7 log.go:172] (0xc00277a840) (0xc002e205a0) Stream removed, broadcasting: 5 Apr 23 00:39:54.124: INFO: Waiting for responses: map[] Apr 23 00:39:54.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.6:8080/dial?request=hostname&protocol=udp&host=10.244.1.85&port=8081&tries=1'] Namespace:pod-network-test-2000 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:39:54.128: INFO: >>> kubeConfig: /root/.kube/config I0423 00:39:54.157480 7 log.go:172] (0xc002d50420) (0xc002e20be0) Create stream I0423 00:39:54.157516 7 log.go:172] (0xc002d50420) (0xc002e20be0) Stream added, broadcasting: 1 I0423 00:39:54.159519 7 log.go:172] (0xc002d50420) Reply frame received for 1 I0423 00:39:54.159581 7 log.go:172] (0xc002d50420) (0xc0025c60a0) Create stream 
I0423 00:39:54.159609 7 log.go:172] (0xc002d50420) (0xc0025c60a0) Stream added, broadcasting: 3 I0423 00:39:54.160763 7 log.go:172] (0xc002d50420) Reply frame received for 3 I0423 00:39:54.160801 7 log.go:172] (0xc002d50420) (0xc00118e8c0) Create stream I0423 00:39:54.160814 7 log.go:172] (0xc002d50420) (0xc00118e8c0) Stream added, broadcasting: 5 I0423 00:39:54.162014 7 log.go:172] (0xc002d50420) Reply frame received for 5 I0423 00:39:54.236699 7 log.go:172] (0xc002d50420) Data frame received for 3 I0423 00:39:54.236731 7 log.go:172] (0xc0025c60a0) (3) Data frame handling I0423 00:39:54.236754 7 log.go:172] (0xc0025c60a0) (3) Data frame sent I0423 00:39:54.237445 7 log.go:172] (0xc002d50420) Data frame received for 5 I0423 00:39:54.237468 7 log.go:172] (0xc00118e8c0) (5) Data frame handling I0423 00:39:54.237606 7 log.go:172] (0xc002d50420) Data frame received for 3 I0423 00:39:54.237626 7 log.go:172] (0xc0025c60a0) (3) Data frame handling I0423 00:39:54.239050 7 log.go:172] (0xc002d50420) Data frame received for 1 I0423 00:39:54.239098 7 log.go:172] (0xc002e20be0) (1) Data frame handling I0423 00:39:54.239123 7 log.go:172] (0xc002e20be0) (1) Data frame sent I0423 00:39:54.239135 7 log.go:172] (0xc002d50420) (0xc002e20be0) Stream removed, broadcasting: 1 I0423 00:39:54.239190 7 log.go:172] (0xc002d50420) (0xc002e20be0) Stream removed, broadcasting: 1 I0423 00:39:54.239216 7 log.go:172] (0xc002d50420) (0xc0025c60a0) Stream removed, broadcasting: 3 I0423 00:39:54.239232 7 log.go:172] (0xc002d50420) (0xc00118e8c0) Stream removed, broadcasting: 5 Apr 23 00:39:54.239: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:39:54.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0423 00:39:54.239341 7 log.go:172] (0xc002d50420) Go away received STEP: Destroying namespace "pod-network-test-2000" for this suite. 
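The intra-pod UDP check above boils down to one HTTP request: the suite execs `curl` inside the client pod against the netserver's `/dial` endpoint, which relays a UDP "hostname" request to the target pod. A minimal sketch of how that probe URL is assembled, using the IPs from this run (the kubectl invocation is left as a comment because it requires the live cluster):

```shell
# Build the /dial probe URL the test execs from the client pod.
# 10.244.2.6 = test-container-pod IP, 10.244.2.5 = netserver pod IP (this run).
client_ip=10.244.2.6
target_ip=10.244.2.5
url="http://${client_ip}:8080/dial?request=hostname&protocol=udp&host=${target_ip}&port=8081&tries=1"
echo "$url"
# Against the live cluster the suite runs, roughly:
#   kubectl exec -n pod-network-test-2000 test-container-pod -c webserver -- \
#     curl -g -q -s "$url"
```

An empty `Waiting for responses: map[]` line in the log means every expected hostname was received, i.e. the UDP path between the pods worked.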
• [SLOW TEST:24.547 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3682,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:39:54.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:40:54.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-217" for this suite. 
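The readiness-probe case above emits no command output: the pod simply runs for the full observation window with a probe that always fails, and the suite asserts the container never becomes Ready and never restarts (readiness failures, unlike liveness failures, do not trigger a restart). The exact probe the suite configures is not visible in this log; a trivially reproducible always-failing exec probe looks like:

```shell
# An always-failing exec probe command: any non-zero exit marks the
# container NotReady on each kubelet probe cycle.
/bin/sh -c 'exit 1'
echo "probe exit code: $?"
```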
• [SLOW TEST:60.072 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3683,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:40:54.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-07397117-def1-4e27-9c6c-6473f0e0cea1 STEP: Creating a pod to test consume configMaps Apr 23 00:40:54.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0" in namespace "configmap-2376" to be "Succeeded or Failed" Apr 23 00:40:54.492: INFO: Pod "pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.456431ms Apr 23 00:40:56.495: INFO: Pod "pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023103655s Apr 23 00:40:58.500: INFO: Pod "pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027452348s STEP: Saw pod success Apr 23 00:40:58.500: INFO: Pod "pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0" satisfied condition "Succeeded or Failed" Apr 23 00:40:58.503: INFO: Trying to get logs from node latest-worker pod pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0 container configmap-volume-test: STEP: delete the pod Apr 23 00:40:58.553: INFO: Waiting for pod pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0 to disappear Apr 23 00:40:58.563: INFO: Pod pod-configmaps-13b2310d-1878-499c-8d4f-7b3381ab22e0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:40:58.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2376" for this suite. 
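The ConfigMap case above mounts the map through a projected volume that maps a single key to a custom path with an explicit per-item file mode. A sketch of the relevant volume stanza; the key name, path, and mode are illustrative assumptions (only the ConfigMap name is taken from this run):

```shell
# Print a hypothetical volume stanza for the "mappings and Item mode" case.
cat <<'EOF'
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume-map-07397117-def1-4e27-9c6c-6473f0e0cea1
    items:
    - key: data-2          # illustrative key name
      path: path/to/data-2 # custom path inside the mount
      mode: 0400           # per-item file mode; overrides defaultMode
EOF
```

The `[LinuxOnly]` tag on this test exists because file modes on mounted volumes are not meaningful on Windows nodes.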
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3715,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:40:58.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:40:58.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2573' Apr 23 00:40:58.957: INFO: stderr: "" Apr 23 00:40:58.957: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 23 00:40:58.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2573' Apr 23 00:40:59.524: INFO: stderr: "" Apr 23 00:40:59.524: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 23 00:41:00.774: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:41:00.774: INFO: Found 0 / 1 Apr 23 00:41:01.534: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:41:01.534: INFO: Found 0 / 1 Apr 23 00:41:02.529: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:41:02.529: INFO: Found 0 / 1 Apr 23 00:41:03.528: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:41:03.528: INFO: Found 1 / 1 Apr 23 00:41:03.528: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 23 00:41:03.531: INFO: Selector matched 1 pods for map[app:agnhost] Apr 23 00:41:03.531: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 23 00:41:03.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-56d5k --namespace=kubectl-2573' Apr 23 00:41:03.678: INFO: stderr: "" Apr 23 00:41:03.678: INFO: stdout: "Name: agnhost-master-56d5k\nNamespace: kubectl-2573\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Thu, 23 Apr 2020 00:40:58 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.87\nIPs:\n IP: 10.244.1.87\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://fd93de68f3e198abbb19c86c8df9864451ea4eccaf561489341dfb3d45b5bae9\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Apr 2020 00:41:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-2l282 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-2l282:\n Type: Secret (a volume populated 
by a Secret)\n SecretName: default-token-2l282\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2573/agnhost-master-56d5k to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" Apr 23 00:41:03.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2573' Apr 23 00:41:03.791: INFO: stderr: "" Apr 23 00:41:03.791: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2573\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-56d5k\n" Apr 23 00:41:03.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2573' Apr 23 00:41:03.889: INFO: stderr: "" Apr 23 00:41:03.889: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2573\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 
10.96.62.200\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.87:6379\nSession Affinity: None\nEvents: \n" Apr 23 00:41:03.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 23 00:41:04.037: INFO: stderr: "" Apr 23 00:41:04.037: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 23 Apr 2020 00:40:59 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 23 Apr 2020 00:37:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Apr 2020 00:37:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Apr 2020 00:37:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Apr 2020 00:37:22 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n 
hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 38d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 23 00:41:04.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-2573' Apr 23 00:41:04.142: INFO: stderr: "" Apr 23 00:41:04.142: INFO: stdout: "Name: kubectl-2573\nLabels: e2e-framework=kubectl\n 
e2e-run=121f9a33-06cb-45d7-aa8f-f9a1efa75554\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:04.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2573" for this suite. • [SLOW TEST:5.578 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":223,"skipped":3735,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:04.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:41:04.270: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", 
UID:"7af29a69-af73-43df-9737-e2171fbb9512", Controller:(*bool)(0xc00315f2c2), BlockOwnerDeletion:(*bool)(0xc00315f2c3)}} Apr 23 00:41:04.283: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3f6fdb87-0d0b-4be3-a233-ffe83958e7c0", Controller:(*bool)(0xc00224cf6a), BlockOwnerDeletion:(*bool)(0xc00224cf6b)}} Apr 23 00:41:04.337: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"cd65d9ed-7441-41d0-8228-5417d594c4a7", Controller:(*bool)(0xc00224d3ea), BlockOwnerDeletion:(*bool)(0xc00224d3eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:09.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3210" for this suite. • [SLOW TEST:5.243 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":224,"skipped":3741,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:09.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:41:10.313: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:41:12.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723199270, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723199270, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723199270, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723199270, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:41:15.377: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:41:15.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom 
resource e2e-test-webhook-4492-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4690" for this suite. STEP: Destroying namespace "webhook-4690-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.228 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":225,"skipped":3744,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:16.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be 
provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 23 00:41:20.698: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2272" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:20.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:41:20.806: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 23 00:41:22.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-306 create -f -' Apr 23 00:41:25.343: INFO: stderr: "" Apr 23 00:41:25.343: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 23 00:41:25.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-306 delete e2e-test-crd-publish-openapi-4522-crds test-cr' Apr 23 00:41:25.456: INFO: stderr: "" Apr 23 00:41:25.456: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 23 00:41:25.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-306 apply -f -' Apr 23 00:41:25.700: INFO: stderr: "" Apr 23 00:41:25.700: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 23 00:41:25.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-306 delete e2e-test-crd-publish-openapi-4522-crds test-cr' Apr 23 00:41:25.802: INFO: stderr: "" Apr 23 00:41:25.802: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 23 00:41:25.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4522-crds' Apr 23 
00:41:26.049: INFO: stderr: "" Apr 23 00:41:26.049: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4522-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-306" for this suite. • [SLOW TEST:8.183 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":227,"skipped":3781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:28.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:41:29.016: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 23 00:41:29.026: INFO: Number of nodes with available pods: 0 Apr 23 00:41:29.026: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 23 00:41:29.108: INFO: Number of nodes with available pods: 0 Apr 23 00:41:29.108: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:30.112: INFO: Number of nodes with available pods: 0 Apr 23 00:41:30.112: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:31.176: INFO: Number of nodes with available pods: 0 Apr 23 00:41:31.176: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:32.112: INFO: Number of nodes with available pods: 0 Apr 23 00:41:32.113: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:33.113: INFO: Number of nodes with available pods: 1 Apr 23 00:41:33.113: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 23 00:41:33.143: INFO: Number of nodes with available pods: 1 Apr 23 00:41:33.143: INFO: Number of running nodes: 0, number of available pods: 1 Apr 23 00:41:34.148: INFO: Number of nodes with available pods: 0 Apr 23 00:41:34.148: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 23 00:41:34.158: INFO: Number of nodes with available pods: 0 Apr 23 00:41:34.158: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:35.193: INFO: Number of nodes with available pods: 0 Apr 23 00:41:35.193: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:36.162: INFO: Number 
of nodes with available pods: 0 Apr 23 00:41:36.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:37.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:37.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:38.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:38.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:39.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:39.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:40.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:40.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:41.161: INFO: Number of nodes with available pods: 0 Apr 23 00:41:41.161: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:42.163: INFO: Number of nodes with available pods: 0 Apr 23 00:41:42.163: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:43.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:43.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:44.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:44.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:45.162: INFO: Number of nodes with available pods: 0 Apr 23 00:41:45.162: INFO: Node latest-worker is running more than one daemon pod Apr 23 00:41:46.162: INFO: Number of nodes with available pods: 1 Apr 23 00:41:46.162: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7358, will wait for the garbage collector to delete the pods Apr 23 00:41:46.224: INFO: Deleting DaemonSet.extensions daemon-set took: 5.67429ms Apr 23 
00:41:46.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267391ms Apr 23 00:41:49.528: INFO: Number of nodes with available pods: 0 Apr 23 00:41:49.528: INFO: Number of running nodes: 0, number of available pods: 0 Apr 23 00:41:49.531: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7358/daemonsets","resourceVersion":"10267849"},"items":null} Apr 23 00:41:49.552: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7358/pods","resourceVersion":"10267849"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:41:49.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7358" for this suite. • [SLOW TEST:20.660 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":228,"skipped":3806,"failed":0} [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:41:49.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in 
namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:19.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4001" for this suite. 
• [SLOW TEST:30.380 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3806,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:19.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:20.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "secrets-9767" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":230,"skipped":3817,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:20.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 23 00:42:24.833: INFO: Successfully updated pod "labelsupdate1c64b3bc-1738-4e1b-a06b-d3ee098a44ca" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:26.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3000" for this suite. 
• [SLOW TEST:6.763 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3828,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:26.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-0ce0d31c-899f-4fa8-814f-49e0f8878e94 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:30.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5089" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3841,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:31.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 23 00:42:31.078: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:43.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2049" for this suite. 
• [SLOW TEST:12.008 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3854,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:43.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 23 00:42:43.089: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:43.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-443" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":234,"skipped":3855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:43.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 23 00:42:43.334: INFO: Waiting up to 5m0s for pod "pod-ce8b4437-10c8-431a-ae73-e8e7c1185444" in namespace "emptydir-7804" to be "Succeeded or Failed" Apr 23 00:42:43.351: INFO: Pod "pod-ce8b4437-10c8-431a-ae73-e8e7c1185444": Phase="Pending", Reason="", readiness=false. Elapsed: 17.728132ms Apr 23 00:42:45.355: INFO: Pod "pod-ce8b4437-10c8-431a-ae73-e8e7c1185444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021253741s Apr 23 00:42:47.370: INFO: Pod "pod-ce8b4437-10c8-431a-ae73-e8e7c1185444": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03581205s STEP: Saw pod success Apr 23 00:42:47.370: INFO: Pod "pod-ce8b4437-10c8-431a-ae73-e8e7c1185444" satisfied condition "Succeeded or Failed" Apr 23 00:42:47.372: INFO: Trying to get logs from node latest-worker pod pod-ce8b4437-10c8-431a-ae73-e8e7c1185444 container test-container: STEP: delete the pod Apr 23 00:42:47.403: INFO: Waiting for pod pod-ce8b4437-10c8-431a-ae73-e8e7c1185444 to disappear Apr 23 00:42:47.417: INFO: Pod pod-ce8b4437-10c8-431a-ae73-e8e7c1185444 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:47.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7804" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:47.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API 
volume plugin Apr 23 00:42:47.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82" in namespace "projected-7084" to be "Succeeded or Failed" Apr 23 00:42:47.501: INFO: Pod "downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617319ms Apr 23 00:42:49.505: INFO: Pod "downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007622305s Apr 23 00:42:51.509: INFO: Pod "downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012044252s STEP: Saw pod success Apr 23 00:42:51.509: INFO: Pod "downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82" satisfied condition "Succeeded or Failed" Apr 23 00:42:51.512: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82 container client-container: STEP: delete the pod Apr 23 00:42:51.538: INFO: Waiting for pod downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82 to disappear Apr 23 00:42:51.549: INFO: Pod downwardapi-volume-42c029d5-6186-417d-825a-7307586b4e82 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:51.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7084" for this suite. 
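For reference, a pod of the general shape this DefaultMode test creates looks roughly like the following; names are illustrative and the image is a stand-in. `defaultMode: 0644` on the projected volume is the field under test — it sets the permission bits on the projected files:

```
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # assumption: the real test uses a test image instead
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0644                # the mode asserted on the projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```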
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3940,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:51.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 23 00:42:51.621: INFO: Waiting up to 5m0s for pod "pod-9138ad15-40c0-4ddf-9f6c-19310130f188" in namespace "emptydir-8743" to be "Succeeded or Failed" Apr 23 00:42:51.627: INFO: Pod "pod-9138ad15-40c0-4ddf-9f6c-19310130f188": Phase="Pending", Reason="", readiness=false. Elapsed: 5.536723ms Apr 23 00:42:53.663: INFO: Pod "pod-9138ad15-40c0-4ddf-9f6c-19310130f188": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042025062s Apr 23 00:42:55.667: INFO: Pod "pod-9138ad15-40c0-4ddf-9f6c-19310130f188": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045651051s STEP: Saw pod success Apr 23 00:42:55.667: INFO: Pod "pod-9138ad15-40c0-4ddf-9f6c-19310130f188" satisfied condition "Succeeded or Failed" Apr 23 00:42:55.669: INFO: Trying to get logs from node latest-worker pod pod-9138ad15-40c0-4ddf-9f6c-19310130f188 container test-container: STEP: delete the pod Apr 23 00:42:55.691: INFO: Waiting for pod pod-9138ad15-40c0-4ddf-9f6c-19310130f188 to disappear Apr 23 00:42:55.693: INFO: Pod pod-9138ad15-40c0-4ddf-9f6c-19310130f188 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:42:55.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8743" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:42:55.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5007 
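The externalname-service created in this step starts life as a type=ExternalName service, which is pure DNS (a CNAME record) with no cluster IP or endpoints. A minimal sketch — the external name below is a placeholder, not the value the test uses:

```
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-5007
spec:
  type: ExternalName
  externalName: example.com   # placeholder target; ExternalName services have no clusterIP
```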
STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5007 I0423 00:42:55.831184 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5007, replica count: 2 I0423 00:42:58.881667 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:43:01.881931 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 23 00:43:01.881: INFO: Creating new exec pod Apr 23 00:43:06.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5007 execpodqp8sd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 23 00:43:07.161: INFO: stderr: "I0423 00:43:07.062867 2926 log.go:172] (0xc000a66d10) (0xc0008f45a0) Create stream\nI0423 00:43:07.062928 2926 log.go:172] (0xc000a66d10) (0xc0008f45a0) Stream added, broadcasting: 1\nI0423 00:43:07.065726 2926 log.go:172] (0xc000a66d10) Reply frame received for 1\nI0423 00:43:07.065794 2926 log.go:172] (0xc000a66d10) (0xc000c14280) Create stream\nI0423 00:43:07.065804 2926 log.go:172] (0xc000a66d10) (0xc000c14280) Stream added, broadcasting: 3\nI0423 00:43:07.066653 2926 log.go:172] (0xc000a66d10) Reply frame received for 3\nI0423 00:43:07.066697 2926 log.go:172] (0xc000a66d10) (0xc000ace500) Create stream\nI0423 00:43:07.066712 2926 log.go:172] (0xc000a66d10) (0xc000ace500) Stream added, broadcasting: 5\nI0423 00:43:07.067481 2926 log.go:172] (0xc000a66d10) Reply frame received for 5\nI0423 00:43:07.153275 2926 log.go:172] (0xc000a66d10) Data frame received for 5\nI0423 00:43:07.153328 2926 log.go:172] (0xc000ace500) (5) Data frame handling\nI0423 00:43:07.153366 2926 log.go:172] (0xc000ace500) (5) Data frame sent\nI0423 
00:43:07.153399 2926 log.go:172] (0xc000a66d10) Data frame received for 5\nI0423 00:43:07.153416 2926 log.go:172] (0xc000ace500) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0423 00:43:07.153491 2926 log.go:172] (0xc000ace500) (5) Data frame sent\nI0423 00:43:07.153797 2926 log.go:172] (0xc000a66d10) Data frame received for 3\nI0423 00:43:07.153891 2926 log.go:172] (0xc000c14280) (3) Data frame handling\nI0423 00:43:07.153933 2926 log.go:172] (0xc000a66d10) Data frame received for 5\nI0423 00:43:07.153963 2926 log.go:172] (0xc000ace500) (5) Data frame handling\nI0423 00:43:07.155502 2926 log.go:172] (0xc000a66d10) Data frame received for 1\nI0423 00:43:07.155540 2926 log.go:172] (0xc0008f45a0) (1) Data frame handling\nI0423 00:43:07.155561 2926 log.go:172] (0xc0008f45a0) (1) Data frame sent\nI0423 00:43:07.155589 2926 log.go:172] (0xc000a66d10) (0xc0008f45a0) Stream removed, broadcasting: 1\nI0423 00:43:07.155631 2926 log.go:172] (0xc000a66d10) Go away received\nI0423 00:43:07.156099 2926 log.go:172] (0xc000a66d10) (0xc0008f45a0) Stream removed, broadcasting: 1\nI0423 00:43:07.156124 2926 log.go:172] (0xc000a66d10) (0xc000c14280) Stream removed, broadcasting: 3\nI0423 00:43:07.156135 2926 log.go:172] (0xc000a66d10) (0xc000ace500) Stream removed, broadcasting: 5\n" Apr 23 00:43:07.161: INFO: stdout: "" Apr 23 00:43:07.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5007 execpodqp8sd -- /bin/sh -x -c nc -zv -t -w 2 10.96.50.149 80' Apr 23 00:43:07.360: INFO: stderr: "I0423 00:43:07.279691 2946 log.go:172] (0xc0009d20b0) (0xc0006692c0) Create stream\nI0423 00:43:07.279760 2946 log.go:172] (0xc0009d20b0) (0xc0006692c0) Stream added, broadcasting: 1\nI0423 00:43:07.282904 2946 log.go:172] (0xc0009d20b0) Reply frame received for 1\nI0423 00:43:07.282965 2946 log.go:172] (0xc0009d20b0) 
(0xc000992000) Create stream\nI0423 00:43:07.282976 2946 log.go:172] (0xc0009d20b0) (0xc000992000) Stream added, broadcasting: 3\nI0423 00:43:07.283869 2946 log.go:172] (0xc0009d20b0) Reply frame received for 3\nI0423 00:43:07.284048 2946 log.go:172] (0xc0009d20b0) (0xc0006694a0) Create stream\nI0423 00:43:07.284068 2946 log.go:172] (0xc0009d20b0) (0xc0006694a0) Stream added, broadcasting: 5\nI0423 00:43:07.284967 2946 log.go:172] (0xc0009d20b0) Reply frame received for 5\nI0423 00:43:07.353689 2946 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0423 00:43:07.353743 2946 log.go:172] (0xc0009d20b0) Data frame received for 3\nI0423 00:43:07.353774 2946 log.go:172] (0xc000992000) (3) Data frame handling\nI0423 00:43:07.353797 2946 log.go:172] (0xc0006694a0) (5) Data frame handling\nI0423 00:43:07.353808 2946 log.go:172] (0xc0006694a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.50.149 80\nConnection to 10.96.50.149 80 port [tcp/http] succeeded!\nI0423 00:43:07.353917 2946 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0423 00:43:07.353935 2946 log.go:172] (0xc0006694a0) (5) Data frame handling\nI0423 00:43:07.355447 2946 log.go:172] (0xc0009d20b0) Data frame received for 1\nI0423 00:43:07.355474 2946 log.go:172] (0xc0006692c0) (1) Data frame handling\nI0423 00:43:07.355492 2946 log.go:172] (0xc0006692c0) (1) Data frame sent\nI0423 00:43:07.355519 2946 log.go:172] (0xc0009d20b0) (0xc0006692c0) Stream removed, broadcasting: 1\nI0423 00:43:07.355539 2946 log.go:172] (0xc0009d20b0) Go away received\nI0423 00:43:07.355982 2946 log.go:172] (0xc0009d20b0) (0xc0006692c0) Stream removed, broadcasting: 1\nI0423 00:43:07.356002 2946 log.go:172] (0xc0009d20b0) (0xc000992000) Stream removed, broadcasting: 3\nI0423 00:43:07.356011 2946 log.go:172] (0xc0009d20b0) (0xc0006694a0) Stream removed, broadcasting: 5\n" Apr 23 00:43:07.360: INFO: stdout: "" Apr 23 00:43:07.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=services-5007 execpodqp8sd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31213' Apr 23 00:43:07.568: INFO: stderr: "I0423 00:43:07.495225 2968 log.go:172] (0xc00003a420) (0xc000811220) Create stream\nI0423 00:43:07.495308 2968 log.go:172] (0xc00003a420) (0xc000811220) Stream added, broadcasting: 1\nI0423 00:43:07.497683 2968 log.go:172] (0xc00003a420) Reply frame received for 1\nI0423 00:43:07.497732 2968 log.go:172] (0xc00003a420) (0xc00091a000) Create stream\nI0423 00:43:07.497750 2968 log.go:172] (0xc00003a420) (0xc00091a000) Stream added, broadcasting: 3\nI0423 00:43:07.498703 2968 log.go:172] (0xc00003a420) Reply frame received for 3\nI0423 00:43:07.498748 2968 log.go:172] (0xc00003a420) (0xc000811400) Create stream\nI0423 00:43:07.498765 2968 log.go:172] (0xc00003a420) (0xc000811400) Stream added, broadcasting: 5\nI0423 00:43:07.499571 2968 log.go:172] (0xc00003a420) Reply frame received for 5\nI0423 00:43:07.561643 2968 log.go:172] (0xc00003a420) Data frame received for 3\nI0423 00:43:07.561675 2968 log.go:172] (0xc00091a000) (3) Data frame handling\nI0423 00:43:07.561696 2968 log.go:172] (0xc00003a420) Data frame received for 5\nI0423 00:43:07.561703 2968 log.go:172] (0xc000811400) (5) Data frame handling\nI0423 00:43:07.561711 2968 log.go:172] (0xc000811400) (5) Data frame sent\nI0423 00:43:07.561719 2968 log.go:172] (0xc00003a420) Data frame received for 5\nI0423 00:43:07.561724 2968 log.go:172] (0xc000811400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31213\nConnection to 172.17.0.13 31213 port [tcp/31213] succeeded!\nI0423 00:43:07.563222 2968 log.go:172] (0xc00003a420) Data frame received for 1\nI0423 00:43:07.563241 2968 log.go:172] (0xc000811220) (1) Data frame handling\nI0423 00:43:07.563253 2968 log.go:172] (0xc000811220) (1) Data frame sent\nI0423 00:43:07.563261 2968 log.go:172] (0xc00003a420) (0xc000811220) Stream removed, broadcasting: 1\nI0423 00:43:07.563274 2968 log.go:172] 
(0xc00003a420) Go away received\nI0423 00:43:07.563731 2968 log.go:172] (0xc00003a420) (0xc000811220) Stream removed, broadcasting: 1\nI0423 00:43:07.563775 2968 log.go:172] (0xc00003a420) (0xc00091a000) Stream removed, broadcasting: 3\nI0423 00:43:07.563787 2968 log.go:172] (0xc00003a420) (0xc000811400) Stream removed, broadcasting: 5\n" Apr 23 00:43:07.568: INFO: stdout: "" Apr 23 00:43:07.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5007 execpodqp8sd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31213' Apr 23 00:43:07.791: INFO: stderr: "I0423 00:43:07.712355 2989 log.go:172] (0xc0000ebef0) (0xc0009ca000) Create stream\nI0423 00:43:07.712414 2989 log.go:172] (0xc0000ebef0) (0xc0009ca000) Stream added, broadcasting: 1\nI0423 00:43:07.715004 2989 log.go:172] (0xc0000ebef0) Reply frame received for 1\nI0423 00:43:07.715030 2989 log.go:172] (0xc0000ebef0) (0xc0009ca0a0) Create stream\nI0423 00:43:07.715038 2989 log.go:172] (0xc0000ebef0) (0xc0009ca0a0) Stream added, broadcasting: 3\nI0423 00:43:07.715715 2989 log.go:172] (0xc0000ebef0) Reply frame received for 3\nI0423 00:43:07.715741 2989 log.go:172] (0xc0000ebef0) (0xc0009ca140) Create stream\nI0423 00:43:07.715748 2989 log.go:172] (0xc0000ebef0) (0xc0009ca140) Stream added, broadcasting: 5\nI0423 00:43:07.716375 2989 log.go:172] (0xc0000ebef0) Reply frame received for 5\nI0423 00:43:07.781289 2989 log.go:172] (0xc0000ebef0) Data frame received for 5\nI0423 00:43:07.781322 2989 log.go:172] (0xc0009ca140) (5) Data frame handling\nI0423 00:43:07.781340 2989 log.go:172] (0xc0009ca140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31213\nI0423 00:43:07.785475 2989 log.go:172] (0xc0000ebef0) Data frame received for 3\nI0423 00:43:07.785562 2989 log.go:172] (0xc0000ebef0) Data frame received for 5\nI0423 00:43:07.785588 2989 log.go:172] (0xc0009ca140) (5) Data frame handling\nI0423 00:43:07.785603 2989 log.go:172] 
(0xc0009ca140) (5) Data frame sent\nI0423 00:43:07.785613 2989 log.go:172] (0xc0000ebef0) Data frame received for 5\nI0423 00:43:07.785622 2989 log.go:172] (0xc0009ca140) (5) Data frame handling\nConnection to 172.17.0.12 31213 port [tcp/31213] succeeded!\nI0423 00:43:07.785642 2989 log.go:172] (0xc0009ca0a0) (3) Data frame handling\nI0423 00:43:07.785659 2989 log.go:172] (0xc0000ebef0) Data frame received for 1\nI0423 00:43:07.785671 2989 log.go:172] (0xc0009ca000) (1) Data frame handling\nI0423 00:43:07.785681 2989 log.go:172] (0xc0009ca000) (1) Data frame sent\nI0423 00:43:07.785696 2989 log.go:172] (0xc0000ebef0) (0xc0009ca000) Stream removed, broadcasting: 1\nI0423 00:43:07.785711 2989 log.go:172] (0xc0000ebef0) Go away received\nI0423 00:43:07.786196 2989 log.go:172] (0xc0000ebef0) (0xc0009ca000) Stream removed, broadcasting: 1\nI0423 00:43:07.786215 2989 log.go:172] (0xc0000ebef0) (0xc0009ca0a0) Stream removed, broadcasting: 3\nI0423 00:43:07.786225 2989 log.go:172] (0xc0000ebef0) (0xc0009ca140) Stream removed, broadcasting: 5\n" Apr 23 00:43:07.791: INFO: stdout: "" Apr 23 00:43:07.791: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:43:07.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5007" for this suite. 
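After the type flip exercised above, the same Service object gains a cluster IP and a node port, which is why the `nc` probes against 10.96.50.149:80 and both nodes on 31213 succeed. A sketch of the post-change shape, with the port values taken from the log; the selector is an assumption about how the backing replication controller's pods are labeled:

```
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-5007
spec:
  type: NodePort
  selector:
    name: externalname-service   # assumption: label carried by the RC's pods
  ports:
  - port: 80                     # the cluster-IP port probed in the log
    targetPort: 80
    nodePort: 31213              # allocated NodePort seen in the log's nc probes
```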
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:12.164 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":238,"skipped":3979,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:43:07.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:43:07.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5437" for this suite.
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":239,"skipped":4021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:43:08.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 23 00:43:08.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8079 /api/v1/namespaces/watch-8079/configmaps/e2e-watch-test-watch-closed 99887785-a7d6-4448-b2e5-cb45008e6749 10268408 0 2020-04-23 00:43:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:43:08.085: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8079 /api/v1/namespaces/watch-8079/configmaps/e2e-watch-test-watch-closed 99887785-a7d6-4448-b2e5-cb45008e6749 10268409 0 2020-04-23 00:43:08 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 23 00:43:08.096: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8079 /api/v1/namespaces/watch-8079/configmaps/e2e-watch-test-watch-closed 99887785-a7d6-4448-b2e5-cb45008e6749 10268410 0 2020-04-23 00:43:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 23 00:43:08.096: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8079 /api/v1/namespaces/watch-8079/configmaps/e2e-watch-test-watch-closed 99887785-a7d6-4448-b2e5-cb45008e6749 10268411 0 2020-04-23 00:43:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:43:08.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8079" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":240,"skipped":4052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:43:08.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 23 00:43:08.173: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 23 00:43:08.185: INFO: Waiting for terminating namespaces to be deleted...
Apr 23 00:43:08.187: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 23 00:43:08.193: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.193: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 00:43:08.193: INFO: externalname-service-sj7l5 from services-5007 started at 2020-04-23 00:42:55 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.193: INFO: Container externalname-service ready: true, restart count 0
Apr 23 00:43:08.193: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.193: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 00:43:08.193: INFO: execpodqp8sd from services-5007 started at 2020-04-23 00:43:01 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.193: INFO: Container agnhost-pause ready: true, restart count 0
Apr 23 00:43:08.193: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 23 00:43:08.198: INFO: externalname-service-t4slm from services-5007 started at 2020-04-23 00:42:55 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.198: INFO: Container externalname-service ready: true, restart count 0
Apr 23 00:43:08.198: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.198: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 00:43:08.198: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 23 00:43:08.198: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
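The restricted-pod scheduled here carries a nodeSelector that no node in the cluster satisfies, which is what produces the FailedScheduling event recorded next. Roughly — the selector key/value and image are illustrative assumptions, not the exact ones the test uses:

```
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty        # assumption: any key/value no node carries reproduces the event
  containers:
  - name: restricted
    image: busybox         # illustrative image
    command: ["sleep", "3600"]
```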
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16084c247fed59ed], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 23 00:43:09.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3677" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":241,"skipped":4077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 23 00:43:09.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6298
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-6298 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6298 Apr 23 00:43:09.481: INFO: Found 0 stateful pods, waiting for 1 Apr 23 00:43:19.485: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 23 00:43:19.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:43:19.760: INFO: stderr: "I0423 00:43:19.624057 3010 log.go:172] (0xc00098c630) (0xc0007055e0) Create stream\nI0423 00:43:19.624106 3010 log.go:172] (0xc00098c630) (0xc0007055e0) Stream added, broadcasting: 1\nI0423 00:43:19.626721 3010 log.go:172] (0xc00098c630) Reply frame received for 1\nI0423 00:43:19.626768 3010 log.go:172] (0xc00098c630) (0xc0004d8aa0) Create stream\nI0423 00:43:19.626790 3010 log.go:172] (0xc00098c630) (0xc0004d8aa0) Stream added, broadcasting: 3\nI0423 00:43:19.627744 3010 log.go:172] (0xc00098c630) Reply frame received for 3\nI0423 00:43:19.627804 3010 log.go:172] (0xc00098c630) (0xc0003e2000) Create stream\nI0423 00:43:19.627821 3010 log.go:172] (0xc00098c630) (0xc0003e2000) Stream added, broadcasting: 5\nI0423 00:43:19.628769 3010 log.go:172] (0xc00098c630) Reply frame received for 5\nI0423 00:43:19.721751 3010 log.go:172] (0xc00098c630) Data frame received for 5\nI0423 00:43:19.721786 3010 log.go:172] (0xc0003e2000) (5) Data frame handling\nI0423 00:43:19.721805 3010 log.go:172] (0xc0003e2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:43:19.754111 3010 log.go:172] (0xc00098c630) Data frame received for 5\nI0423 00:43:19.754233 3010 log.go:172] 
(0xc0003e2000) (5) Data frame handling\nI0423 00:43:19.754279 3010 log.go:172] (0xc00098c630) Data frame received for 3\nI0423 00:43:19.754300 3010 log.go:172] (0xc0004d8aa0) (3) Data frame handling\nI0423 00:43:19.754328 3010 log.go:172] (0xc0004d8aa0) (3) Data frame sent\nI0423 00:43:19.754351 3010 log.go:172] (0xc00098c630) Data frame received for 3\nI0423 00:43:19.754370 3010 log.go:172] (0xc0004d8aa0) (3) Data frame handling\nI0423 00:43:19.756162 3010 log.go:172] (0xc00098c630) Data frame received for 1\nI0423 00:43:19.756189 3010 log.go:172] (0xc0007055e0) (1) Data frame handling\nI0423 00:43:19.756203 3010 log.go:172] (0xc0007055e0) (1) Data frame sent\nI0423 00:43:19.756229 3010 log.go:172] (0xc00098c630) (0xc0007055e0) Stream removed, broadcasting: 1\nI0423 00:43:19.756258 3010 log.go:172] (0xc00098c630) Go away received\nI0423 00:43:19.756562 3010 log.go:172] (0xc00098c630) (0xc0007055e0) Stream removed, broadcasting: 1\nI0423 00:43:19.756576 3010 log.go:172] (0xc00098c630) (0xc0004d8aa0) Stream removed, broadcasting: 3\nI0423 00:43:19.756583 3010 log.go:172] (0xc00098c630) (0xc0003e2000) Stream removed, broadcasting: 5\n" Apr 23 00:43:19.760: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:43:19.760: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:43:19.764: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 23 00:43:29.769: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:43:29.769: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:43:29.790: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:43:29.790: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:19 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:43:29.791: INFO: ss-1 Pending [] Apr 23 00:43:29.791: INFO: Apr 23 00:43:29.791: INFO: StatefulSet ss has not reached scale 3, at 2 Apr 23 00:43:30.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989357112s Apr 23 00:43:31.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.944355681s Apr 23 00:43:32.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.848343387s Apr 23 00:43:33.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.833745291s Apr 23 00:43:34.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.82959716s Apr 23 00:43:35.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.824538288s Apr 23 00:43:36.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.820655976s Apr 23 00:43:37.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.816463013s Apr 23 00:43:38.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 811.800085ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6298 Apr 23 00:43:39.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:43:40.221: INFO: stderr: "I0423 00:43:40.106529 3031 log.go:172] (0xc0009514a0) (0xc000a0c820) Create stream\nI0423 00:43:40.106579 3031 log.go:172] (0xc0009514a0) (0xc000a0c820) Stream added, broadcasting: 1\nI0423 00:43:40.111673 3031 log.go:172] (0xc0009514a0) Reply frame received for 1\nI0423 
00:43:40.111715 3031 log.go:172] (0xc0009514a0) (0xc0005fd680) Create stream\nI0423 00:43:40.111726 3031 log.go:172] (0xc0009514a0) (0xc0005fd680) Stream added, broadcasting: 3\nI0423 00:43:40.112706 3031 log.go:172] (0xc0009514a0) Reply frame received for 3\nI0423 00:43:40.112736 3031 log.go:172] (0xc0009514a0) (0xc0003b2aa0) Create stream\nI0423 00:43:40.112744 3031 log.go:172] (0xc0009514a0) (0xc0003b2aa0) Stream added, broadcasting: 5\nI0423 00:43:40.114136 3031 log.go:172] (0xc0009514a0) Reply frame received for 5\nI0423 00:43:40.213439 3031 log.go:172] (0xc0009514a0) Data frame received for 5\nI0423 00:43:40.213496 3031 log.go:172] (0xc0003b2aa0) (5) Data frame handling\nI0423 00:43:40.213519 3031 log.go:172] (0xc0003b2aa0) (5) Data frame sent\nI0423 00:43:40.213537 3031 log.go:172] (0xc0009514a0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0423 00:43:40.213557 3031 log.go:172] (0xc0009514a0) Data frame received for 3\nI0423 00:43:40.213602 3031 log.go:172] (0xc0005fd680) (3) Data frame handling\nI0423 00:43:40.213630 3031 log.go:172] (0xc0005fd680) (3) Data frame sent\nI0423 00:43:40.213668 3031 log.go:172] (0xc0009514a0) Data frame received for 3\nI0423 00:43:40.213685 3031 log.go:172] (0xc0005fd680) (3) Data frame handling\nI0423 00:43:40.213713 3031 log.go:172] (0xc0003b2aa0) (5) Data frame handling\nI0423 00:43:40.215090 3031 log.go:172] (0xc0009514a0) Data frame received for 1\nI0423 00:43:40.215130 3031 log.go:172] (0xc000a0c820) (1) Data frame handling\nI0423 00:43:40.215167 3031 log.go:172] (0xc000a0c820) (1) Data frame sent\nI0423 00:43:40.215192 3031 log.go:172] (0xc0009514a0) (0xc000a0c820) Stream removed, broadcasting: 1\nI0423 00:43:40.215528 3031 log.go:172] (0xc0009514a0) Go away received\nI0423 00:43:40.215581 3031 log.go:172] (0xc0009514a0) (0xc000a0c820) Stream removed, broadcasting: 1\nI0423 00:43:40.215611 3031 log.go:172] (0xc0009514a0) (0xc0005fd680) Stream removed, broadcasting: 3\nI0423 
00:43:40.215622 3031 log.go:172] (0xc0009514a0) (0xc0003b2aa0) Stream removed, broadcasting: 5\n" Apr 23 00:43:40.221: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:43:40.221: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:43:40.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:43:40.425: INFO: stderr: "I0423 00:43:40.349014 3050 log.go:172] (0xc0009a48f0) (0xc0009cc3c0) Create stream\nI0423 00:43:40.349091 3050 log.go:172] (0xc0009a48f0) (0xc0009cc3c0) Stream added, broadcasting: 1\nI0423 00:43:40.353022 3050 log.go:172] (0xc0009a48f0) Reply frame received for 1\nI0423 00:43:40.353066 3050 log.go:172] (0xc0009a48f0) (0xc00056d720) Create stream\nI0423 00:43:40.353079 3050 log.go:172] (0xc0009a48f0) (0xc00056d720) Stream added, broadcasting: 3\nI0423 00:43:40.354038 3050 log.go:172] (0xc0009a48f0) Reply frame received for 3\nI0423 00:43:40.354072 3050 log.go:172] (0xc0009a48f0) (0xc000444b40) Create stream\nI0423 00:43:40.354081 3050 log.go:172] (0xc0009a48f0) (0xc000444b40) Stream added, broadcasting: 5\nI0423 00:43:40.354903 3050 log.go:172] (0xc0009a48f0) Reply frame received for 5\nI0423 00:43:40.417817 3050 log.go:172] (0xc0009a48f0) Data frame received for 3\nI0423 00:43:40.417857 3050 log.go:172] (0xc00056d720) (3) Data frame handling\nI0423 00:43:40.417870 3050 log.go:172] (0xc00056d720) (3) Data frame sent\nI0423 00:43:40.417878 3050 log.go:172] (0xc0009a48f0) Data frame received for 3\nI0423 00:43:40.417888 3050 log.go:172] (0xc00056d720) (3) Data frame handling\nI0423 00:43:40.417918 3050 log.go:172] (0xc0009a48f0) Data frame received for 5\nI0423 00:43:40.417935 3050 log.go:172] (0xc000444b40) (5) Data frame handling\nI0423 
00:43:40.417950 3050 log.go:172] (0xc000444b40) (5) Data frame sent\nI0423 00:43:40.417961 3050 log.go:172] (0xc0009a48f0) Data frame received for 5\nI0423 00:43:40.417971 3050 log.go:172] (0xc000444b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0423 00:43:40.419885 3050 log.go:172] (0xc0009a48f0) Data frame received for 1\nI0423 00:43:40.419929 3050 log.go:172] (0xc0009cc3c0) (1) Data frame handling\nI0423 00:43:40.419959 3050 log.go:172] (0xc0009cc3c0) (1) Data frame sent\nI0423 00:43:40.420004 3050 log.go:172] (0xc0009a48f0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0423 00:43:40.420045 3050 log.go:172] (0xc0009a48f0) Go away received\nI0423 00:43:40.420414 3050 log.go:172] (0xc0009a48f0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0423 00:43:40.420436 3050 log.go:172] (0xc0009a48f0) (0xc00056d720) Stream removed, broadcasting: 3\nI0423 00:43:40.420448 3050 log.go:172] (0xc0009a48f0) (0xc000444b40) Stream removed, broadcasting: 5\n" Apr 23 00:43:40.425: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:43:40.425: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:43:40.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:43:40.661: INFO: stderr: "I0423 00:43:40.572750 3070 log.go:172] (0xc00003a160) (0xc0008c4280) Create stream\nI0423 00:43:40.572831 3070 log.go:172] (0xc00003a160) (0xc0008c4280) Stream added, broadcasting: 1\nI0423 00:43:40.576485 3070 log.go:172] (0xc00003a160) Reply frame received for 1\nI0423 00:43:40.576522 3070 log.go:172] (0xc00003a160) (0xc0008c4320) Create stream\nI0423 00:43:40.576531 3070 log.go:172] 
(0xc00003a160) (0xc0008c4320) Stream added, broadcasting: 3\nI0423 00:43:40.577659 3070 log.go:172] (0xc00003a160) Reply frame received for 3\nI0423 00:43:40.577706 3070 log.go:172] (0xc00003a160) (0xc00059d860) Create stream\nI0423 00:43:40.577727 3070 log.go:172] (0xc00003a160) (0xc00059d860) Stream added, broadcasting: 5\nI0423 00:43:40.578523 3070 log.go:172] (0xc00003a160) Reply frame received for 5\nI0423 00:43:40.655475 3070 log.go:172] (0xc00003a160) Data frame received for 5\nI0423 00:43:40.655498 3070 log.go:172] (0xc00059d860) (5) Data frame handling\nI0423 00:43:40.655505 3070 log.go:172] (0xc00059d860) (5) Data frame sent\nI0423 00:43:40.655510 3070 log.go:172] (0xc00003a160) Data frame received for 5\nI0423 00:43:40.655514 3070 log.go:172] (0xc00059d860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0423 00:43:40.655531 3070 log.go:172] (0xc00003a160) Data frame received for 3\nI0423 00:43:40.655535 3070 log.go:172] (0xc0008c4320) (3) Data frame handling\nI0423 00:43:40.655542 3070 log.go:172] (0xc0008c4320) (3) Data frame sent\nI0423 00:43:40.655547 3070 log.go:172] (0xc00003a160) Data frame received for 3\nI0423 00:43:40.655551 3070 log.go:172] (0xc0008c4320) (3) Data frame handling\nI0423 00:43:40.656974 3070 log.go:172] (0xc00003a160) Data frame received for 1\nI0423 00:43:40.656997 3070 log.go:172] (0xc0008c4280) (1) Data frame handling\nI0423 00:43:40.657023 3070 log.go:172] (0xc0008c4280) (1) Data frame sent\nI0423 00:43:40.657051 3070 log.go:172] (0xc00003a160) (0xc0008c4280) Stream removed, broadcasting: 1\nI0423 00:43:40.657069 3070 log.go:172] (0xc00003a160) Go away received\nI0423 00:43:40.657483 3070 log.go:172] (0xc00003a160) (0xc0008c4280) Stream removed, broadcasting: 1\nI0423 00:43:40.657516 3070 log.go:172] (0xc00003a160) (0xc0008c4320) Stream removed, broadcasting: 3\nI0423 00:43:40.657533 3070 log.go:172] (0xc00003a160) 
(0xc00059d860) Stream removed, broadcasting: 5\n" Apr 23 00:43:40.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 23 00:43:40.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 23 00:43:40.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 23 00:43:50.669: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:43:50.669: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 00:43:50.669: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 23 00:43:50.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:43:50.901: INFO: stderr: "I0423 00:43:50.795648 3090 log.go:172] (0xc0009f46e0) (0xc0008a0140) Create stream\nI0423 00:43:50.795703 3090 log.go:172] (0xc0009f46e0) (0xc0008a0140) Stream added, broadcasting: 1\nI0423 00:43:50.798778 3090 log.go:172] (0xc0009f46e0) Reply frame received for 1\nI0423 00:43:50.798826 3090 log.go:172] (0xc0009f46e0) (0xc0008a01e0) Create stream\nI0423 00:43:50.798840 3090 log.go:172] (0xc0009f46e0) (0xc0008a01e0) Stream added, broadcasting: 3\nI0423 00:43:50.800079 3090 log.go:172] (0xc0009f46e0) Reply frame received for 3\nI0423 00:43:50.800125 3090 log.go:172] (0xc0009f46e0) (0xc000a14000) Create stream\nI0423 00:43:50.800141 3090 log.go:172] (0xc0009f46e0) (0xc000a14000) Stream added, broadcasting: 5\nI0423 00:43:50.801092 3090 log.go:172] (0xc0009f46e0) Reply frame received for 5\nI0423 00:43:50.894964 3090 log.go:172] (0xc0009f46e0) Data frame received for 3\nI0423 00:43:50.894998 3090 
log.go:172] (0xc0008a01e0) (3) Data frame handling\nI0423 00:43:50.895013 3090 log.go:172] (0xc0008a01e0) (3) Data frame sent\nI0423 00:43:50.895025 3090 log.go:172] (0xc0009f46e0) Data frame received for 3\nI0423 00:43:50.895034 3090 log.go:172] (0xc0008a01e0) (3) Data frame handling\nI0423 00:43:50.895046 3090 log.go:172] (0xc0009f46e0) Data frame received for 5\nI0423 00:43:50.895056 3090 log.go:172] (0xc000a14000) (5) Data frame handling\nI0423 00:43:50.895067 3090 log.go:172] (0xc000a14000) (5) Data frame sent\nI0423 00:43:50.895077 3090 log.go:172] (0xc0009f46e0) Data frame received for 5\nI0423 00:43:50.895086 3090 log.go:172] (0xc000a14000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:43:50.896844 3090 log.go:172] (0xc0009f46e0) Data frame received for 1\nI0423 00:43:50.896870 3090 log.go:172] (0xc0008a0140) (1) Data frame handling\nI0423 00:43:50.896885 3090 log.go:172] (0xc0008a0140) (1) Data frame sent\nI0423 00:43:50.896902 3090 log.go:172] (0xc0009f46e0) (0xc0008a0140) Stream removed, broadcasting: 1\nI0423 00:43:50.896916 3090 log.go:172] (0xc0009f46e0) Go away received\nI0423 00:43:50.897323 3090 log.go:172] (0xc0009f46e0) (0xc0008a0140) Stream removed, broadcasting: 1\nI0423 00:43:50.897342 3090 log.go:172] (0xc0009f46e0) (0xc0008a01e0) Stream removed, broadcasting: 3\nI0423 00:43:50.897350 3090 log.go:172] (0xc0009f46e0) (0xc000a14000) Stream removed, broadcasting: 5\n" Apr 23 00:43:50.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:43:50.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:43:50.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:43:51.141: INFO: stderr: "I0423 
00:43:51.031558 3111 log.go:172] (0xc000bd1340) (0xc000970780) Create stream\nI0423 00:43:51.031621 3111 log.go:172] (0xc000bd1340) (0xc000970780) Stream added, broadcasting: 1\nI0423 00:43:51.036261 3111 log.go:172] (0xc000bd1340) Reply frame received for 1\nI0423 00:43:51.036312 3111 log.go:172] (0xc000bd1340) (0xc0006b1540) Create stream\nI0423 00:43:51.036329 3111 log.go:172] (0xc000bd1340) (0xc0006b1540) Stream added, broadcasting: 3\nI0423 00:43:51.037413 3111 log.go:172] (0xc000bd1340) Reply frame received for 3\nI0423 00:43:51.037462 3111 log.go:172] (0xc000bd1340) (0xc0005b4960) Create stream\nI0423 00:43:51.037481 3111 log.go:172] (0xc000bd1340) (0xc0005b4960) Stream added, broadcasting: 5\nI0423 00:43:51.038435 3111 log.go:172] (0xc000bd1340) Reply frame received for 5\nI0423 00:43:51.106613 3111 log.go:172] (0xc000bd1340) Data frame received for 5\nI0423 00:43:51.106649 3111 log.go:172] (0xc0005b4960) (5) Data frame handling\nI0423 00:43:51.106672 3111 log.go:172] (0xc0005b4960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:43:51.132495 3111 log.go:172] (0xc000bd1340) Data frame received for 3\nI0423 00:43:51.132538 3111 log.go:172] (0xc0006b1540) (3) Data frame handling\nI0423 00:43:51.132568 3111 log.go:172] (0xc0006b1540) (3) Data frame sent\nI0423 00:43:51.132589 3111 log.go:172] (0xc000bd1340) Data frame received for 3\nI0423 00:43:51.132710 3111 log.go:172] (0xc000bd1340) Data frame received for 5\nI0423 00:43:51.132733 3111 log.go:172] (0xc0005b4960) (5) Data frame handling\nI0423 00:43:51.132755 3111 log.go:172] (0xc0006b1540) (3) Data frame handling\nI0423 00:43:51.134682 3111 log.go:172] (0xc000bd1340) Data frame received for 1\nI0423 00:43:51.134711 3111 log.go:172] (0xc000970780) (1) Data frame handling\nI0423 00:43:51.134775 3111 log.go:172] (0xc000970780) (1) Data frame sent\nI0423 00:43:51.134879 3111 log.go:172] (0xc000bd1340) (0xc000970780) Stream removed, broadcasting: 1\nI0423 00:43:51.134904 3111 
log.go:172] (0xc000bd1340) Go away received\nI0423 00:43:51.135372 3111 log.go:172] (0xc000bd1340) (0xc000970780) Stream removed, broadcasting: 1\nI0423 00:43:51.135402 3111 log.go:172] (0xc000bd1340) (0xc0006b1540) Stream removed, broadcasting: 3\nI0423 00:43:51.135415 3111 log.go:172] (0xc000bd1340) (0xc0005b4960) Stream removed, broadcasting: 5\n" Apr 23 00:43:51.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:43:51.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:43:51.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 23 00:43:51.359: INFO: stderr: "I0423 00:43:51.274305 3131 log.go:172] (0xc000a8afd0) (0xc000a98500) Create stream\nI0423 00:43:51.274352 3131 log.go:172] (0xc000a8afd0) (0xc000a98500) Stream added, broadcasting: 1\nI0423 00:43:51.278744 3131 log.go:172] (0xc000a8afd0) Reply frame received for 1\nI0423 00:43:51.278817 3131 log.go:172] (0xc000a8afd0) (0xc0006ed5e0) Create stream\nI0423 00:43:51.278835 3131 log.go:172] (0xc000a8afd0) (0xc0006ed5e0) Stream added, broadcasting: 3\nI0423 00:43:51.279516 3131 log.go:172] (0xc000a8afd0) Reply frame received for 3\nI0423 00:43:51.279547 3131 log.go:172] (0xc000a8afd0) (0xc0005b8a00) Create stream\nI0423 00:43:51.279555 3131 log.go:172] (0xc000a8afd0) (0xc0005b8a00) Stream added, broadcasting: 5\nI0423 00:43:51.280189 3131 log.go:172] (0xc000a8afd0) Reply frame received for 5\nI0423 00:43:51.323036 3131 log.go:172] (0xc000a8afd0) Data frame received for 5\nI0423 00:43:51.323060 3131 log.go:172] (0xc0005b8a00) (5) Data frame handling\nI0423 00:43:51.323081 3131 log.go:172] (0xc0005b8a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0423 00:43:51.352200 
3131 log.go:172] (0xc000a8afd0) Data frame received for 3\nI0423 00:43:51.352231 3131 log.go:172] (0xc0006ed5e0) (3) Data frame handling\nI0423 00:43:51.352240 3131 log.go:172] (0xc0006ed5e0) (3) Data frame sent\nI0423 00:43:51.352247 3131 log.go:172] (0xc000a8afd0) Data frame received for 3\nI0423 00:43:51.352252 3131 log.go:172] (0xc0006ed5e0) (3) Data frame handling\nI0423 00:43:51.352295 3131 log.go:172] (0xc000a8afd0) Data frame received for 5\nI0423 00:43:51.352320 3131 log.go:172] (0xc0005b8a00) (5) Data frame handling\nI0423 00:43:51.354126 3131 log.go:172] (0xc000a8afd0) Data frame received for 1\nI0423 00:43:51.354171 3131 log.go:172] (0xc000a98500) (1) Data frame handling\nI0423 00:43:51.354221 3131 log.go:172] (0xc000a98500) (1) Data frame sent\nI0423 00:43:51.354250 3131 log.go:172] (0xc000a8afd0) (0xc000a98500) Stream removed, broadcasting: 1\nI0423 00:43:51.354293 3131 log.go:172] (0xc000a8afd0) Go away received\nI0423 00:43:51.354628 3131 log.go:172] (0xc000a8afd0) (0xc000a98500) Stream removed, broadcasting: 1\nI0423 00:43:51.354650 3131 log.go:172] (0xc000a8afd0) (0xc0006ed5e0) Stream removed, broadcasting: 3\nI0423 00:43:51.354659 3131 log.go:172] (0xc000a8afd0) (0xc0005b8a00) Stream removed, broadcasting: 5\n" Apr 23 00:43:51.360: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 23 00:43:51.360: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 23 00:43:51.360: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:43:51.363: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 23 00:44:01.372: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:44:01.372: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 23 00:44:01.372: INFO: Waiting for pod ss-2 to enter Running - Ready=false, 
currently Running - Ready=false Apr 23 00:44:01.401: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:01.401: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:01.401: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:01.401: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:01.401: INFO: Apr 23 00:44:01.401: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 00:44:02.406: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:02.406: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:02.406: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:02.406: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:02.406: INFO: Apr 23 00:44:02.406: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 00:44:03.411: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:03.411: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:03.411: INFO: ss-1 latest-worker Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:03.411: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:03.411: INFO: Apr 23 00:44:03.411: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 00:44:04.415: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:04.416: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:04.416: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:04.416: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:04.416: INFO: Apr 23 00:44:04.416: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 00:44:05.421: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:05.421: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:05.421: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:05.421: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 
00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:05.421: INFO: Apr 23 00:44:05.421: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 00:44:06.426: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:06.426: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:06.426: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:06.426: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 
00:44:06.426: INFO: Apr 23 00:44:06.426: INFO: StatefulSet ss has not reached scale 0, at 3 [... identical once-per-second pod status dumps from 00:44:07.429 through 00:44:09.439 omitted: ss-0, ss-1, and ss-2 all remained Pending with ContainersNotReady, and StatefulSet ss had not reached scale 0 ...] Apr 23 00:44:10.444: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 00:44:10.444: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:09 +0000 UTC }] Apr 23 00:44:10.444: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00
+0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:10.444: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 00:43:29 +0000 UTC }] Apr 23 00:44:10.444: INFO: Apr 23 00:44:10.444: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6298 Apr 23 00:44:11.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:44:11.585: INFO: rc: 1 Apr 23 00:44:11.585: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 23 00:44:21.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:44:21.676: INFO: rc: 1 Apr 23 00:44:21.676: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 00:44:31.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:44:31.764: INFO: rc: 1 Apr 23 00:44:31.764: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 00:44:41.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:44:41.862: INFO: rc: 1 Apr 23 00:44:41.862: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 00:44:51.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:44:51.956: INFO: rc: 1 Apr 23 00:44:51.956: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [... identical RunHostCmd retries every 10s from 00:45:01 through 00:48:44 omitted: each attempt returned rc: 1 with the same NotFound error for pod "ss-0" ...] Apr 23 00:48:54.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:48:54.489: INFO: rc: 1 Apr 23 00:48:54.489: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 00:49:04.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:49:04.578: INFO: rc: 1 Apr 23 00:49:04.578: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 00:49:14.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6298 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 23 00:49:14.671: INFO: rc: 1 Apr 23 00:49:14.671: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Apr 23 00:49:14.671: INFO: Scaling statefulset ss to 0 Apr 23 00:49:14.680: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 23 00:49:14.682: INFO: Deleting all statefulset in ns statefulset-6298 Apr 23 00:49:14.685: INFO: Scaling statefulset ss to 0 Apr 23 00:49:14.692: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 00:49:14.695: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:49:14.707: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6298" for this suite. • [SLOW TEST:365.405 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":242,"skipped":4099,"failed":0} [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:49:14.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 23 00:49:24.821: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:24.821: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:24.859611 7 log.go:172] (0xc00277a4d0) (0xc0017821e0) Create stream I0423 00:49:24.859640 7 log.go:172] (0xc00277a4d0) (0xc0017821e0) Stream added, broadcasting: 1 I0423 00:49:24.861660 7 log.go:172] (0xc00277a4d0) Reply frame received for 1 I0423 00:49:24.861704 7 log.go:172] (0xc00277a4d0) (0xc00255e000) Create stream I0423 00:49:24.861721 7 log.go:172] (0xc00277a4d0) (0xc00255e000) Stream added, broadcasting: 3 I0423 00:49:24.862618 7 log.go:172] (0xc00277a4d0) Reply frame received for 3 I0423 00:49:24.862659 7 log.go:172] (0xc00277a4d0) (0xc001782320) Create stream I0423 00:49:24.862671 7 log.go:172] (0xc00277a4d0) (0xc001782320) Stream added, broadcasting: 5 I0423 00:49:24.863502 7 log.go:172] (0xc00277a4d0) Reply frame received for 5 I0423 00:49:24.957379 7 log.go:172] (0xc00277a4d0) Data frame received for 5 I0423 00:49:24.957401 7 log.go:172] (0xc001782320) (5) Data frame handling I0423 00:49:24.957452 7 log.go:172] (0xc00277a4d0) Data frame received for 3 I0423 00:49:24.957488 7 log.go:172] (0xc00255e000) (3) Data frame handling I0423 00:49:24.957514 7 log.go:172] (0xc00255e000) (3) Data frame sent I0423 00:49:24.957537 7 log.go:172] (0xc00277a4d0) Data frame received for 3 I0423 00:49:24.957552 7 log.go:172] (0xc00255e000) (3) Data frame handling I0423 00:49:24.958956 7 log.go:172] (0xc00277a4d0) Data frame received for 1 I0423 00:49:24.958980 7 log.go:172] (0xc0017821e0) (1) Data frame handling I0423 00:49:24.958999 7 log.go:172] (0xc0017821e0) (1) Data frame sent I0423 00:49:24.959046 7 log.go:172] (0xc00277a4d0) (0xc0017821e0) Stream removed, broadcasting: 1 I0423 00:49:24.959085 7 log.go:172] (0xc00277a4d0) Go away received I0423 00:49:24.959151 7 log.go:172] (0xc00277a4d0) (0xc0017821e0) Stream removed, broadcasting: 1 I0423 00:49:24.959170 7 log.go:172] (0xc00277a4d0) (0xc00255e000) 
Stream removed, broadcasting: 3 I0423 00:49:24.959182 7 log.go:172] (0xc00277a4d0) (0xc001782320) Stream removed, broadcasting: 5 Apr 23 00:49:24.959: INFO: Exec stderr: "" Apr 23 00:49:24.959: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:24.959: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:24.990400 7 log.go:172] (0xc0023cd340) (0xc00291e0a0) Create stream I0423 00:49:24.990430 7 log.go:172] (0xc0023cd340) (0xc00291e0a0) Stream added, broadcasting: 1 I0423 00:49:24.992110 7 log.go:172] (0xc0023cd340) Reply frame received for 1 I0423 00:49:24.992148 7 log.go:172] (0xc0023cd340) (0xc001eca1e0) Create stream I0423 00:49:24.992161 7 log.go:172] (0xc0023cd340) (0xc001eca1e0) Stream added, broadcasting: 3 I0423 00:49:24.993062 7 log.go:172] (0xc0023cd340) Reply frame received for 3 I0423 00:49:24.993093 7 log.go:172] (0xc0023cd340) (0xc00291e140) Create stream I0423 00:49:24.993108 7 log.go:172] (0xc0023cd340) (0xc00291e140) Stream added, broadcasting: 5 I0423 00:49:24.994064 7 log.go:172] (0xc0023cd340) Reply frame received for 5 I0423 00:49:25.040177 7 log.go:172] (0xc0023cd340) Data frame received for 3 I0423 00:49:25.040246 7 log.go:172] (0xc001eca1e0) (3) Data frame handling I0423 00:49:25.040269 7 log.go:172] (0xc001eca1e0) (3) Data frame sent I0423 00:49:25.040298 7 log.go:172] (0xc0023cd340) Data frame received for 5 I0423 00:49:25.040317 7 log.go:172] (0xc00291e140) (5) Data frame handling I0423 00:49:25.040349 7 log.go:172] (0xc0023cd340) Data frame received for 3 I0423 00:49:25.040368 7 log.go:172] (0xc001eca1e0) (3) Data frame handling I0423 00:49:25.042709 7 log.go:172] (0xc0023cd340) Data frame received for 1 I0423 00:49:25.042733 7 log.go:172] (0xc00291e0a0) (1) Data frame handling I0423 00:49:25.042752 7 log.go:172] (0xc00291e0a0) (1) Data frame sent I0423 00:49:25.042786 7 
log.go:172] (0xc0023cd340) (0xc00291e0a0) Stream removed, broadcasting: 1 I0423 00:49:25.042819 7 log.go:172] (0xc0023cd340) Go away received I0423 00:49:25.042963 7 log.go:172] (0xc0023cd340) (0xc00291e0a0) Stream removed, broadcasting: 1 I0423 00:49:25.042997 7 log.go:172] (0xc0023cd340) (0xc001eca1e0) Stream removed, broadcasting: 3 I0423 00:49:25.043026 7 log.go:172] (0xc0023cd340) (0xc00291e140) Stream removed, broadcasting: 5 Apr 23 00:49:25.043: INFO: Exec stderr: "" Apr 23 00:49:25.043: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.043: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.067788 7 log.go:172] (0xc002e562c0) (0xc00255e320) Create stream I0423 00:49:25.067816 7 log.go:172] (0xc002e562c0) (0xc00255e320) Stream added, broadcasting: 1 I0423 00:49:25.069670 7 log.go:172] (0xc002e562c0) Reply frame received for 1 I0423 00:49:25.069708 7 log.go:172] (0xc002e562c0) (0xc00255e3c0) Create stream I0423 00:49:25.069721 7 log.go:172] (0xc002e562c0) (0xc00255e3c0) Stream added, broadcasting: 3 I0423 00:49:25.070647 7 log.go:172] (0xc002e562c0) Reply frame received for 3 I0423 00:49:25.070674 7 log.go:172] (0xc002e562c0) (0xc00291e1e0) Create stream I0423 00:49:25.070682 7 log.go:172] (0xc002e562c0) (0xc00291e1e0) Stream added, broadcasting: 5 I0423 00:49:25.071589 7 log.go:172] (0xc002e562c0) Reply frame received for 5 I0423 00:49:25.144895 7 log.go:172] (0xc002e562c0) Data frame received for 5 I0423 00:49:25.144930 7 log.go:172] (0xc00291e1e0) (5) Data frame handling I0423 00:49:25.144975 7 log.go:172] (0xc002e562c0) Data frame received for 3 I0423 00:49:25.145005 7 log.go:172] (0xc00255e3c0) (3) Data frame handling I0423 00:49:25.145041 7 log.go:172] (0xc00255e3c0) (3) Data frame sent I0423 00:49:25.145058 7 log.go:172] (0xc002e562c0) Data frame received for 3 I0423 00:49:25.145092 7 
log.go:172] (0xc00255e3c0) (3) Data frame handling I0423 00:49:25.146865 7 log.go:172] (0xc002e562c0) Data frame received for 1 I0423 00:49:25.146894 7 log.go:172] (0xc00255e320) (1) Data frame handling I0423 00:49:25.146921 7 log.go:172] (0xc00255e320) (1) Data frame sent I0423 00:49:25.146943 7 log.go:172] (0xc002e562c0) (0xc00255e320) Stream removed, broadcasting: 1 I0423 00:49:25.147014 7 log.go:172] (0xc002e562c0) Go away received I0423 00:49:25.147037 7 log.go:172] (0xc002e562c0) (0xc00255e320) Stream removed, broadcasting: 1 I0423 00:49:25.147061 7 log.go:172] (0xc002e562c0) (0xc00255e3c0) Stream removed, broadcasting: 3 I0423 00:49:25.147080 7 log.go:172] (0xc002e562c0) (0xc00291e1e0) Stream removed, broadcasting: 5 Apr 23 00:49:25.147: INFO: Exec stderr: "" Apr 23 00:49:25.147: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.147: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.174590 7 log.go:172] (0xc002e56b00) (0xc00255e960) Create stream I0423 00:49:25.174623 7 log.go:172] (0xc002e56b00) (0xc00255e960) Stream added, broadcasting: 1 I0423 00:49:25.176524 7 log.go:172] (0xc002e56b00) Reply frame received for 1 I0423 00:49:25.176564 7 log.go:172] (0xc002e56b00) (0xc00291e280) Create stream I0423 00:49:25.176591 7 log.go:172] (0xc002e56b00) (0xc00291e280) Stream added, broadcasting: 3 I0423 00:49:25.177908 7 log.go:172] (0xc002e56b00) Reply frame received for 3 I0423 00:49:25.177948 7 log.go:172] (0xc002e56b00) (0xc001782640) Create stream I0423 00:49:25.177965 7 log.go:172] (0xc002e56b00) (0xc001782640) Stream added, broadcasting: 5 I0423 00:49:25.178899 7 log.go:172] (0xc002e56b00) Reply frame received for 5 I0423 00:49:25.252235 7 log.go:172] (0xc002e56b00) Data frame received for 5 I0423 00:49:25.252283 7 log.go:172] (0xc001782640) (5) Data frame handling I0423 
00:49:25.252317 7 log.go:172] (0xc002e56b00) Data frame received for 3 I0423 00:49:25.252336 7 log.go:172] (0xc00291e280) (3) Data frame handling I0423 00:49:25.252370 7 log.go:172] (0xc00291e280) (3) Data frame sent I0423 00:49:25.252394 7 log.go:172] (0xc002e56b00) Data frame received for 3 I0423 00:49:25.252419 7 log.go:172] (0xc00291e280) (3) Data frame handling I0423 00:49:25.253827 7 log.go:172] (0xc002e56b00) Data frame received for 1 I0423 00:49:25.253852 7 log.go:172] (0xc00255e960) (1) Data frame handling I0423 00:49:25.253882 7 log.go:172] (0xc00255e960) (1) Data frame sent I0423 00:49:25.253905 7 log.go:172] (0xc002e56b00) (0xc00255e960) Stream removed, broadcasting: 1 I0423 00:49:25.253927 7 log.go:172] (0xc002e56b00) Go away received I0423 00:49:25.254064 7 log.go:172] (0xc002e56b00) (0xc00255e960) Stream removed, broadcasting: 1 I0423 00:49:25.254099 7 log.go:172] (0xc002e56b00) (0xc00291e280) Stream removed, broadcasting: 3 I0423 00:49:25.254136 7 log.go:172] (0xc002e56b00) (0xc001782640) Stream removed, broadcasting: 5 Apr 23 00:49:25.254: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 23 00:49:25.254: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.254: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.288468 7 log.go:172] (0xc0023cd970) (0xc00291e8c0) Create stream I0423 00:49:25.288500 7 log.go:172] (0xc0023cd970) (0xc00291e8c0) Stream added, broadcasting: 1 I0423 00:49:25.291471 7 log.go:172] (0xc0023cd970) Reply frame received for 1 I0423 00:49:25.291510 7 log.go:172] (0xc0023cd970) (0xc00255ea00) Create stream I0423 00:49:25.291524 7 log.go:172] (0xc0023cd970) (0xc00255ea00) Stream added, broadcasting: 3 I0423 00:49:25.292645 7 log.go:172] (0xc0023cd970) Reply frame received for 3 I0423 
00:49:25.292694 7 log.go:172] (0xc0023cd970) (0xc001eca280) Create stream I0423 00:49:25.292710 7 log.go:172] (0xc0023cd970) (0xc001eca280) Stream added, broadcasting: 5 I0423 00:49:25.294049 7 log.go:172] (0xc0023cd970) Reply frame received for 5 I0423 00:49:25.362242 7 log.go:172] (0xc0023cd970) Data frame received for 5 I0423 00:49:25.362282 7 log.go:172] (0xc001eca280) (5) Data frame handling I0423 00:49:25.362307 7 log.go:172] (0xc0023cd970) Data frame received for 3 I0423 00:49:25.362321 7 log.go:172] (0xc00255ea00) (3) Data frame handling I0423 00:49:25.362335 7 log.go:172] (0xc00255ea00) (3) Data frame sent I0423 00:49:25.362346 7 log.go:172] (0xc0023cd970) Data frame received for 3 I0423 00:49:25.362360 7 log.go:172] (0xc00255ea00) (3) Data frame handling I0423 00:49:25.363321 7 log.go:172] (0xc0023cd970) Data frame received for 1 I0423 00:49:25.363340 7 log.go:172] (0xc00291e8c0) (1) Data frame handling I0423 00:49:25.363359 7 log.go:172] (0xc00291e8c0) (1) Data frame sent I0423 00:49:25.363394 7 log.go:172] (0xc0023cd970) (0xc00291e8c0) Stream removed, broadcasting: 1 I0423 00:49:25.363417 7 log.go:172] (0xc0023cd970) Go away received I0423 00:49:25.363523 7 log.go:172] (0xc0023cd970) (0xc00291e8c0) Stream removed, broadcasting: 1 I0423 00:49:25.363537 7 log.go:172] (0xc0023cd970) (0xc00255ea00) Stream removed, broadcasting: 3 I0423 00:49:25.363543 7 log.go:172] (0xc0023cd970) (0xc001eca280) Stream removed, broadcasting: 5 Apr 23 00:49:25.363: INFO: Exec stderr: "" Apr 23 00:49:25.363: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.363: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.391920 7 log.go:172] (0xc00173c370) (0xc0012288c0) Create stream I0423 00:49:25.391948 7 log.go:172] (0xc00173c370) (0xc0012288c0) Stream added, broadcasting: 1 I0423 00:49:25.394274 7 
log.go:172] (0xc00173c370) Reply frame received for 1 I0423 00:49:25.394318 7 log.go:172] (0xc00173c370) (0xc001228a00) Create stream I0423 00:49:25.394334 7 log.go:172] (0xc00173c370) (0xc001228a00) Stream added, broadcasting: 3 I0423 00:49:25.395355 7 log.go:172] (0xc00173c370) Reply frame received for 3 I0423 00:49:25.395384 7 log.go:172] (0xc00173c370) (0xc001228d20) Create stream I0423 00:49:25.395394 7 log.go:172] (0xc00173c370) (0xc001228d20) Stream added, broadcasting: 5 I0423 00:49:25.396310 7 log.go:172] (0xc00173c370) Reply frame received for 5 I0423 00:49:25.468457 7 log.go:172] (0xc00173c370) Data frame received for 5 I0423 00:49:25.468505 7 log.go:172] (0xc001228d20) (5) Data frame handling I0423 00:49:25.468547 7 log.go:172] (0xc00173c370) Data frame received for 3 I0423 00:49:25.468565 7 log.go:172] (0xc001228a00) (3) Data frame handling I0423 00:49:25.468584 7 log.go:172] (0xc001228a00) (3) Data frame sent I0423 00:49:25.468598 7 log.go:172] (0xc00173c370) Data frame received for 3 I0423 00:49:25.468616 7 log.go:172] (0xc001228a00) (3) Data frame handling I0423 00:49:25.470037 7 log.go:172] (0xc00173c370) Data frame received for 1 I0423 00:49:25.470064 7 log.go:172] (0xc0012288c0) (1) Data frame handling I0423 00:49:25.470090 7 log.go:172] (0xc0012288c0) (1) Data frame sent I0423 00:49:25.470109 7 log.go:172] (0xc00173c370) (0xc0012288c0) Stream removed, broadcasting: 1 I0423 00:49:25.470134 7 log.go:172] (0xc00173c370) Go away received I0423 00:49:25.470281 7 log.go:172] (0xc00173c370) (0xc0012288c0) Stream removed, broadcasting: 1 I0423 00:49:25.470304 7 log.go:172] (0xc00173c370) (0xc001228a00) Stream removed, broadcasting: 3 I0423 00:49:25.470318 7 log.go:172] (0xc00173c370) (0xc001228d20) Stream removed, broadcasting: 5 Apr 23 00:49:25.470: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 23 00:49:25.470: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.470: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.510087 7 log.go:172] (0xc00277ab00) (0xc001782c80) Create stream I0423 00:49:25.510112 7 log.go:172] (0xc00277ab00) (0xc001782c80) Stream added, broadcasting: 1 I0423 00:49:25.512079 7 log.go:172] (0xc00277ab00) Reply frame received for 1 I0423 00:49:25.512129 7 log.go:172] (0xc00277ab00) (0xc00255eaa0) Create stream I0423 00:49:25.512147 7 log.go:172] (0xc00277ab00) (0xc00255eaa0) Stream added, broadcasting: 3 I0423 00:49:25.513082 7 log.go:172] (0xc00277ab00) Reply frame received for 3 I0423 00:49:25.513306 7 log.go:172] (0xc00277ab00) (0xc00291eb40) Create stream I0423 00:49:25.513334 7 log.go:172] (0xc00277ab00) (0xc00291eb40) Stream added, broadcasting: 5 I0423 00:49:25.514282 7 log.go:172] (0xc00277ab00) Reply frame received for 5 I0423 00:49:25.584428 7 log.go:172] (0xc00277ab00) Data frame received for 5 I0423 00:49:25.584479 7 log.go:172] (0xc00291eb40) (5) Data frame handling I0423 00:49:25.584519 7 log.go:172] (0xc00277ab00) Data frame received for 3 I0423 00:49:25.584534 7 log.go:172] (0xc00255eaa0) (3) Data frame handling I0423 00:49:25.584553 7 log.go:172] (0xc00255eaa0) (3) Data frame sent I0423 00:49:25.584567 7 log.go:172] (0xc00277ab00) Data frame received for 3 I0423 00:49:25.584589 7 log.go:172] (0xc00255eaa0) (3) Data frame handling I0423 00:49:25.586443 7 log.go:172] (0xc00277ab00) Data frame received for 1 I0423 00:49:25.586493 7 log.go:172] (0xc001782c80) (1) Data frame handling I0423 00:49:25.586552 7 log.go:172] (0xc001782c80) (1) Data frame sent I0423 00:49:25.586598 7 log.go:172] (0xc00277ab00) (0xc001782c80) Stream removed, broadcasting: 1 I0423 00:49:25.586633 7 log.go:172] (0xc00277ab00) Go away received I0423 00:49:25.586767 7 log.go:172] (0xc00277ab00) (0xc001782c80) Stream removed, broadcasting: 1 
I0423 00:49:25.586811 7 log.go:172] (0xc00277ab00) (0xc00255eaa0) Stream removed, broadcasting: 3 I0423 00:49:25.586845 7 log.go:172] (0xc00277ab00) (0xc00291eb40) Stream removed, broadcasting: 5 Apr 23 00:49:25.586: INFO: Exec stderr: "" Apr 23 00:49:25.586: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.586: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.623000 7 log.go:172] (0xc002e57130) (0xc00255ef00) Create stream I0423 00:49:25.623033 7 log.go:172] (0xc002e57130) (0xc00255ef00) Stream added, broadcasting: 1 I0423 00:49:25.624888 7 log.go:172] (0xc002e57130) Reply frame received for 1 I0423 00:49:25.624940 7 log.go:172] (0xc002e57130) (0xc001782fa0) Create stream I0423 00:49:25.624960 7 log.go:172] (0xc002e57130) (0xc001782fa0) Stream added, broadcasting: 3 I0423 00:49:25.626334 7 log.go:172] (0xc002e57130) Reply frame received for 3 I0423 00:49:25.626386 7 log.go:172] (0xc002e57130) (0xc001783360) Create stream I0423 00:49:25.626401 7 log.go:172] (0xc002e57130) (0xc001783360) Stream added, broadcasting: 5 I0423 00:49:25.627422 7 log.go:172] (0xc002e57130) Reply frame received for 5 I0423 00:49:25.714208 7 log.go:172] (0xc002e57130) Data frame received for 3 I0423 00:49:25.714276 7 log.go:172] (0xc001782fa0) (3) Data frame handling I0423 00:49:25.714288 7 log.go:172] (0xc001782fa0) (3) Data frame sent I0423 00:49:25.714293 7 log.go:172] (0xc002e57130) Data frame received for 3 I0423 00:49:25.714297 7 log.go:172] (0xc001782fa0) (3) Data frame handling I0423 00:49:25.714324 7 log.go:172] (0xc002e57130) Data frame received for 5 I0423 00:49:25.714332 7 log.go:172] (0xc001783360) (5) Data frame handling I0423 00:49:25.715522 7 log.go:172] (0xc002e57130) Data frame received for 1 I0423 00:49:25.715534 7 log.go:172] (0xc00255ef00) (1) Data frame handling I0423 
00:49:25.715545 7 log.go:172] (0xc00255ef00) (1) Data frame sent I0423 00:49:25.715555 7 log.go:172] (0xc002e57130) (0xc00255ef00) Stream removed, broadcasting: 1 I0423 00:49:25.715607 7 log.go:172] (0xc002e57130) (0xc00255ef00) Stream removed, broadcasting: 1 I0423 00:49:25.715617 7 log.go:172] (0xc002e57130) (0xc001782fa0) Stream removed, broadcasting: 3 I0423 00:49:25.715624 7 log.go:172] (0xc002e57130) (0xc001783360) Stream removed, broadcasting: 5 Apr 23 00:49:25.715: INFO: Exec stderr: "" Apr 23 00:49:25.715: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.715: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.715712 7 log.go:172] (0xc002e57130) Go away received I0423 00:49:25.736960 7 log.go:172] (0xc002d50790) (0xc001eca5a0) Create stream I0423 00:49:25.736986 7 log.go:172] (0xc002d50790) (0xc001eca5a0) Stream added, broadcasting: 1 I0423 00:49:25.738811 7 log.go:172] (0xc002d50790) Reply frame received for 1 I0423 00:49:25.738926 7 log.go:172] (0xc002d50790) (0xc00255efa0) Create stream I0423 00:49:25.738942 7 log.go:172] (0xc002d50790) (0xc00255efa0) Stream added, broadcasting: 3 I0423 00:49:25.739736 7 log.go:172] (0xc002d50790) Reply frame received for 3 I0423 00:49:25.739763 7 log.go:172] (0xc002d50790) (0xc00291ebe0) Create stream I0423 00:49:25.739774 7 log.go:172] (0xc002d50790) (0xc00291ebe0) Stream added, broadcasting: 5 I0423 00:49:25.740516 7 log.go:172] (0xc002d50790) Reply frame received for 5 I0423 00:49:25.798343 7 log.go:172] (0xc002d50790) Data frame received for 5 I0423 00:49:25.798392 7 log.go:172] (0xc00291ebe0) (5) Data frame handling I0423 00:49:25.798422 7 log.go:172] (0xc002d50790) Data frame received for 3 I0423 00:49:25.798434 7 log.go:172] (0xc00255efa0) (3) Data frame handling I0423 00:49:25.798445 7 log.go:172] (0xc00255efa0) (3) Data frame sent 
I0423 00:49:25.798452 7 log.go:172] (0xc002d50790) Data frame received for 3 I0423 00:49:25.798457 7 log.go:172] (0xc00255efa0) (3) Data frame handling I0423 00:49:25.799899 7 log.go:172] (0xc002d50790) Data frame received for 1 I0423 00:49:25.799935 7 log.go:172] (0xc001eca5a0) (1) Data frame handling I0423 00:49:25.799958 7 log.go:172] (0xc001eca5a0) (1) Data frame sent I0423 00:49:25.799983 7 log.go:172] (0xc002d50790) (0xc001eca5a0) Stream removed, broadcasting: 1 I0423 00:49:25.800009 7 log.go:172] (0xc002d50790) Go away received I0423 00:49:25.800123 7 log.go:172] (0xc002d50790) (0xc001eca5a0) Stream removed, broadcasting: 1 I0423 00:49:25.800155 7 log.go:172] (0xc002d50790) (0xc00255efa0) Stream removed, broadcasting: 3 I0423 00:49:25.800165 7 log.go:172] (0xc002d50790) (0xc00291ebe0) Stream removed, broadcasting: 5 Apr 23 00:49:25.800: INFO: Exec stderr: "" Apr 23 00:49:25.800: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6526 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:49:25.800: INFO: >>> kubeConfig: /root/.kube/config I0423 00:49:25.824249 7 log.go:172] (0xc002d50dc0) (0xc001eca960) Create stream I0423 00:49:25.824280 7 log.go:172] (0xc002d50dc0) (0xc001eca960) Stream added, broadcasting: 1 I0423 00:49:25.826399 7 log.go:172] (0xc002d50dc0) Reply frame received for 1 I0423 00:49:25.826431 7 log.go:172] (0xc002d50dc0) (0xc0017834a0) Create stream I0423 00:49:25.826441 7 log.go:172] (0xc002d50dc0) (0xc0017834a0) Stream added, broadcasting: 3 I0423 00:49:25.827389 7 log.go:172] (0xc002d50dc0) Reply frame received for 3 I0423 00:49:25.827411 7 log.go:172] (0xc002d50dc0) (0xc00291ec80) Create stream I0423 00:49:25.827418 7 log.go:172] (0xc002d50dc0) (0xc00291ec80) Stream added, broadcasting: 5 I0423 00:49:25.828454 7 log.go:172] (0xc002d50dc0) Reply frame received for 5 I0423 00:49:25.908165 7 log.go:172] (0xc002d50dc0) Data 
frame received for 3 I0423 00:49:25.908202 7 log.go:172] (0xc0017834a0) (3) Data frame handling I0423 00:49:25.908229 7 log.go:172] (0xc0017834a0) (3) Data frame sent I0423 00:49:25.908252 7 log.go:172] (0xc002d50dc0) Data frame received for 3 I0423 00:49:25.908263 7 log.go:172] (0xc0017834a0) (3) Data frame handling I0423 00:49:25.908309 7 log.go:172] (0xc002d50dc0) Data frame received for 5 I0423 00:49:25.908356 7 log.go:172] (0xc00291ec80) (5) Data frame handling I0423 00:49:25.910317 7 log.go:172] (0xc002d50dc0) Data frame received for 1 I0423 00:49:25.910352 7 log.go:172] (0xc001eca960) (1) Data frame handling I0423 00:49:25.910403 7 log.go:172] (0xc001eca960) (1) Data frame sent I0423 00:49:25.910435 7 log.go:172] (0xc002d50dc0) (0xc001eca960) Stream removed, broadcasting: 1 I0423 00:49:25.910460 7 log.go:172] (0xc002d50dc0) Go away received I0423 00:49:25.910597 7 log.go:172] (0xc002d50dc0) (0xc001eca960) Stream removed, broadcasting: 1 I0423 00:49:25.910644 7 log.go:172] (0xc002d50dc0) (0xc0017834a0) Stream removed, broadcasting: 3 I0423 00:49:25.910671 7 log.go:172] (0xc002d50dc0) (0xc00291ec80) Stream removed, broadcasting: 5 Apr 23 00:49:25.910: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:49:25.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6526" for this suite. 
• [SLOW TEST:11.207 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4099,"failed":0} [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:49:25.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-79266c71-14fd-4bbf-9cd4-5ac7f861be67 in namespace container-probe-91 Apr 23 00:49:32.040: INFO: Started pod busybox-79266c71-14fd-4bbf-9cd4-5ac7f861be67 in namespace container-probe-91 STEP: checking the pod's current state and verifying that restartCount is present Apr 23 00:49:32.043: INFO: Initial restart count of pod busybox-79266c71-14fd-4bbf-9cd4-5ac7f861be67 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:53:33.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-91" for this suite. • [SLOW TEST:247.842 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4099,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:53:33.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-607dd1bb-04f5-45e2-b2a0-5754f3aec85c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:53:33.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6376" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":245,"skipped":4106,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:53:33.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 23 00:53:33.894: INFO: Created pod &Pod{ObjectMeta:{dns-5099 dns-5099 /api/v1/namespaces/dns-5099/pods/dns-5099 3b841f50-b2e8-4142-901d-7e1a72fd7db0 10270517 0 2020-04-23 00:53:33 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-28m2c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-28m2c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-28m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 23 00:53:33.898: INFO: The status of Pod dns-5099 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:53:35.921: INFO: The status of Pod dns-5099 is Pending, waiting for it to be Running (with Ready = true) Apr 23 00:53:37.902: INFO: The status of Pod dns-5099 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 23 00:53:37.902: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5099 PodName:dns-5099 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:53:37.902: INFO: >>> kubeConfig: /root/.kube/config I0423 00:53:37.934657 7 log.go:172] (0xc002e562c0) (0xc0003ad4a0) Create stream I0423 00:53:37.934692 7 log.go:172] (0xc002e562c0) (0xc0003ad4a0) Stream added, broadcasting: 1 I0423 00:53:37.936698 7 log.go:172] (0xc002e562c0) Reply frame received for 1 I0423 00:53:37.936736 7 log.go:172] (0xc002e562c0) (0xc0003ad540) Create stream I0423 00:53:37.936749 7 log.go:172] (0xc002e562c0) (0xc0003ad540) Stream added, broadcasting: 3 I0423 00:53:37.937768 7 log.go:172] (0xc002e562c0) Reply frame received for 3 I0423 00:53:37.937800 7 log.go:172] (0xc002e562c0) (0xc0003ad860) Create stream I0423 00:53:37.937812 7 log.go:172] (0xc002e562c0) (0xc0003ad860) Stream added, broadcasting: 5 I0423 00:53:37.938659 7 log.go:172] (0xc002e562c0) Reply frame received for 5 I0423 00:53:38.003558 7 log.go:172] (0xc002e562c0) Data frame received for 3 I0423 00:53:38.003594 7 log.go:172] (0xc0003ad540) (3) Data frame handling I0423 00:53:38.003618 7 log.go:172] (0xc0003ad540) (3) Data frame sent I0423 00:53:38.004673 7 log.go:172] (0xc002e562c0) Data frame received for 3 I0423 00:53:38.004724 7 log.go:172] (0xc0003ad540) (3) Data frame handling I0423 00:53:38.004776 7 log.go:172] (0xc002e562c0) Data frame received for 5 I0423 00:53:38.004801 7 log.go:172] (0xc0003ad860) (5) Data frame handling I0423 00:53:38.006206 7 log.go:172] (0xc002e562c0) Data frame received for 1 I0423 00:53:38.006224 7 log.go:172] (0xc0003ad4a0) (1) Data frame handling I0423 00:53:38.006236 7 log.go:172] (0xc0003ad4a0) (1) Data frame sent I0423 00:53:38.006291 7 log.go:172] (0xc002e562c0) (0xc0003ad4a0) Stream removed, broadcasting: 1 I0423 00:53:38.006365 7 log.go:172] (0xc002e562c0) Go away received I0423 00:53:38.006408 7 log.go:172] (0xc002e562c0) 
(0xc0003ad4a0) Stream removed, broadcasting: 1 I0423 00:53:38.006435 7 log.go:172] (0xc002e562c0) (0xc0003ad540) Stream removed, broadcasting: 3 I0423 00:53:38.006442 7 log.go:172] (0xc002e562c0) (0xc0003ad860) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 23 00:53:38.006: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5099 PodName:dns-5099 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 00:53:38.006: INFO: >>> kubeConfig: /root/.kube/config I0423 00:53:38.040551 7 log.go:172] (0xc002d50160) (0xc00118fcc0) Create stream I0423 00:53:38.040587 7 log.go:172] (0xc002d50160) (0xc00118fcc0) Stream added, broadcasting: 1 I0423 00:53:38.042697 7 log.go:172] (0xc002d50160) Reply frame received for 1 I0423 00:53:38.042739 7 log.go:172] (0xc002d50160) (0xc00118fe00) Create stream I0423 00:53:38.042752 7 log.go:172] (0xc002d50160) (0xc00118fe00) Stream added, broadcasting: 3 I0423 00:53:38.043706 7 log.go:172] (0xc002d50160) Reply frame received for 3 I0423 00:53:38.043762 7 log.go:172] (0xc002d50160) (0xc000197b80) Create stream I0423 00:53:38.043787 7 log.go:172] (0xc002d50160) (0xc000197b80) Stream added, broadcasting: 5 I0423 00:53:38.044755 7 log.go:172] (0xc002d50160) Reply frame received for 5 I0423 00:53:38.114834 7 log.go:172] (0xc002d50160) Data frame received for 3 I0423 00:53:38.114878 7 log.go:172] (0xc00118fe00) (3) Data frame handling I0423 00:53:38.114920 7 log.go:172] (0xc00118fe00) (3) Data frame sent I0423 00:53:38.115631 7 log.go:172] (0xc002d50160) Data frame received for 5 I0423 00:53:38.115663 7 log.go:172] (0xc000197b80) (5) Data frame handling I0423 00:53:38.115687 7 log.go:172] (0xc002d50160) Data frame received for 3 I0423 00:53:38.115700 7 log.go:172] (0xc00118fe00) (3) Data frame handling I0423 00:53:38.117394 7 log.go:172] (0xc002d50160) Data frame received for 1 I0423 00:53:38.117412 7 log.go:172] (0xc00118fcc0) (1) 
Data frame handling I0423 00:53:38.117425 7 log.go:172] (0xc00118fcc0) (1) Data frame sent I0423 00:53:38.117446 7 log.go:172] (0xc002d50160) (0xc00118fcc0) Stream removed, broadcasting: 1 I0423 00:53:38.117544 7 log.go:172] (0xc002d50160) (0xc00118fcc0) Stream removed, broadcasting: 1 I0423 00:53:38.117564 7 log.go:172] (0xc002d50160) (0xc00118fe00) Stream removed, broadcasting: 3 I0423 00:53:38.117694 7 log.go:172] (0xc002d50160) (0xc000197b80) Stream removed, broadcasting: 5 Apr 23 00:53:38.117: INFO: Deleting pod dns-5099... I0423 00:53:38.117807 7 log.go:172] (0xc002d50160) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:53:38.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5099" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":246,"skipped":4122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:53:38.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] 
[k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:53:42.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8402" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4149,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:53:42.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:53:43.261: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:53:45.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723200023, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200023, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:53:48.407: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:00.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5020" for this suite. STEP: Destroying namespace "webhook-5020-markers" for this suite. 
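Editor's note: the "should honor timeout" run above registers slow webhooks with varying `timeoutSeconds` and `failurePolicy` values, and confirms that an empty timeout defaults to 10s in the v1 API. A minimal sketch of such a registration follows; the webhook name, service path, and rule are illustrative assumptions, not values taken from this log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example            # illustrative name
webhooks:
- name: slow-webhook.example.com        # hypothetical webhook name
  timeoutSeconds: 1                     # shorter than the 5s webhook latency: requests fail
  failurePolicy: Ignore                 # with Ignore, a timed-out call is tolerated, not rejected
  clientConfig:
    service:
      namespace: webhook-5020           # namespace seen in the log above
      name: e2e-test-webhook            # service name seen in the log above
      path: /always-allow-delay-5s      # hypothetical path
    # caBundle: <base64 CA cert>        # required in practice; elided here
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]           # illustrative target resource
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Omitting `timeoutSeconds` entirely yields the 10s default the log's final step verifies.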
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.150 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":248,"skipped":4151,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:00.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 23 00:54:00.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine 
--labels=run=e2e-test-httpd-pod --namespace=kubectl-5997' Apr 23 00:54:03.708: INFO: stderr: "" Apr 23 00:54:03.708: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 23 00:54:08.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5997 -o json' Apr 23 00:54:08.864: INFO: stderr: "" Apr 23 00:54:08.864: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-23T00:54:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5997\",\n \"resourceVersion\": \"10270751\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5997/pods/e2e-test-httpd-pod\",\n \"uid\": \"430d5d79-f349-4c21-b487-cf70dc98769a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zcqpp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zcqpp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zcqpp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T00:54:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T00:54:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T00:54:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T00:54:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://de778d88307c24f663c3db97e9b780c729ddf3e7a220bd5abf329df91ba7c99d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-23T00:54:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.27\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.27\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-23T00:54:03Z\"\n }\n}\n" STEP: replace the image in the pod Apr 23 00:54:08.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5997' Apr 23 00:54:09.178: INFO: stderr: "" Apr 23 00:54:09.178: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 23 00:54:09.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5997' Apr 23 00:54:22.798: INFO: stderr: "" Apr 23 00:54:22.798: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:22.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5997" for this suite. • [SLOW TEST:22.137 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":249,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:22.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 23 00:54:22.879: INFO: Waiting up to 5m0s for pod "pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1" in namespace "emptydir-5413" to be "Succeeded or Failed" Apr 23 00:54:22.882: INFO: Pod "pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.413578ms Apr 23 00:54:24.885: INFO: Pod "pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0065058s Apr 23 00:54:26.889: INFO: Pod "pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010606836s STEP: Saw pod success Apr 23 00:54:26.889: INFO: Pod "pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1" satisfied condition "Succeeded or Failed" Apr 23 00:54:26.892: INFO: Trying to get logs from node latest-worker2 pod pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1 container test-container: STEP: delete the pod Apr 23 00:54:26.926: INFO: Waiting for pod pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1 to disappear Apr 23 00:54:26.930: INFO: Pod pod-439c1184-1979-46b9-9ae1-c55b2d53a5a1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:26.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5413" for this suite. 
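Editor's note: the emptydir "(root,0777,tmpfs)" case above creates a pod with a `Memory`-medium emptyDir and checks the mount's permissions. A minimal sketch of an equivalent pod, assuming illustrative names and a busybox permission check rather than the framework's actual test image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Print the mount's mode; the conformance test asserts it is 0777 (drwxrwxrwx).
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir, per the test name
```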
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4196,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:26.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:54:27.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:54:29.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200067, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200067, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200067, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200067, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:54:32.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:54:32.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:34.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8453" for this suite. STEP: Destroying namespace "webhook-8453-markers" for this suite. 
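Editor's note: the "deny custom resource creation, update and deletion" run above registers a validating webhook against a custom resource and expects CREATE, UPDATE, and DELETE to be rejected while the offending data is present. A hedged sketch of such a registration; the API group, resource name, and path are hypothetical placeholders, not taken from this log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example    # illustrative name
webhooks:
- name: deny-crs.example.com            # hypothetical webhook name
  failurePolicy: Fail                   # reject when the webhook denies or cannot be reached
  rules:
  - apiGroups: ["stable.example.com"]   # hypothetical CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]             # hypothetical CRD plural
  clientConfig:
    service:
      namespace: webhook-8453           # namespace seen in the log above
      name: e2e-test-webhook            # service name seen in the log above
      path: /custom-resource            # hypothetical path
    # caBundle: <base64 CA cert>        # required in practice; elided here
  admissionReviewVersions: ["v1"]
  sideEffects: None
```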
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.174 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":251,"skipped":4210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:34.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:54:34.865: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:54:36.876: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:54:38.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200074, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 
00:54:41.935: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 23 00:54:41.955: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:42.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-642" for this suite. STEP: Destroying namespace "webhook-642-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.969 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":252,"skipped":4240,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:42.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:54:42.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 23 00:54:42.408: INFO: stderr: "" Apr 23 00:54:42.409: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:42.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9020" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":253,"skipped":4242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:42.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:54:42.586: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3343 I0423 00:54:42.608770 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3343, replica count: 1 I0423 00:54:43.659220 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:54:44.659475 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:54:45.659704 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 23 00:54:45.805: INFO: Created: latency-svc-j82tp Apr 23 00:54:45.833: INFO: Got endpoints: latency-svc-j82tp [73.92196ms] Apr 23 00:54:45.866: INFO: Created: latency-svc-84px2 Apr 23 00:54:45.882: INFO: Got endpoints: latency-svc-84px2 [48.858601ms] Apr 23 
00:54:45.903: INFO: Created: latency-svc-4nfcc Apr 23 00:54:45.913: INFO: Got endpoints: latency-svc-4nfcc [79.357649ms] Apr 23 00:54:45.926: INFO: Created: latency-svc-tl5wj Apr 23 00:54:46.001: INFO: Got endpoints: latency-svc-tl5wj [167.309865ms] Apr 23 00:54:46.033: INFO: Created: latency-svc-6lbr7 Apr 23 00:54:46.045: INFO: Got endpoints: latency-svc-6lbr7 [212.030257ms] Apr 23 00:54:46.064: INFO: Created: latency-svc-5t2nh Apr 23 00:54:46.088: INFO: Got endpoints: latency-svc-5t2nh [254.204893ms] Apr 23 00:54:46.120: INFO: Created: latency-svc-wnwd9 Apr 23 00:54:46.155: INFO: Got endpoints: latency-svc-wnwd9 [321.150246ms] Apr 23 00:54:46.155: INFO: Created: latency-svc-kkdqn Apr 23 00:54:46.173: INFO: Got endpoints: latency-svc-kkdqn [339.143833ms] Apr 23 00:54:46.195: INFO: Created: latency-svc-pdnbd Apr 23 00:54:46.207: INFO: Got endpoints: latency-svc-pdnbd [373.453527ms] Apr 23 00:54:46.252: INFO: Created: latency-svc-wbmnq Apr 23 00:54:46.291: INFO: Got endpoints: latency-svc-wbmnq [457.150098ms] Apr 23 00:54:46.292: INFO: Created: latency-svc-78hq2 Apr 23 00:54:46.302: INFO: Got endpoints: latency-svc-78hq2 [467.995353ms] Apr 23 00:54:46.329: INFO: Created: latency-svc-gn9hr Apr 23 00:54:46.378: INFO: Got endpoints: latency-svc-gn9hr [543.910431ms] Apr 23 00:54:46.387: INFO: Created: latency-svc-8bc8d Apr 23 00:54:46.404: INFO: Got endpoints: latency-svc-8bc8d [570.06854ms] Apr 23 00:54:46.423: INFO: Created: latency-svc-jnn8b Apr 23 00:54:46.457: INFO: Got endpoints: latency-svc-jnn8b [623.722749ms] Apr 23 00:54:46.515: INFO: Created: latency-svc-9srhd Apr 23 00:54:46.523: INFO: Got endpoints: latency-svc-9srhd [689.488961ms] Apr 23 00:54:46.545: INFO: Created: latency-svc-q6xm2 Apr 23 00:54:46.565: INFO: Got endpoints: latency-svc-q6xm2 [731.759376ms] Apr 23 00:54:46.605: INFO: Created: latency-svc-twkt9 Apr 23 00:54:46.614: INFO: Got endpoints: latency-svc-twkt9 [731.841512ms] Apr 23 00:54:46.653: INFO: Created: latency-svc-f7lf4 Apr 23 
00:54:46.675: INFO: Got endpoints: latency-svc-f7lf4 [762.516777ms] Apr 23 00:54:46.676: INFO: Created: latency-svc-6vh74 Apr 23 00:54:46.692: INFO: Got endpoints: latency-svc-6vh74 [690.909195ms] Apr 23 00:54:46.711: INFO: Created: latency-svc-wrz5s Apr 23 00:54:46.728: INFO: Got endpoints: latency-svc-wrz5s [682.629729ms] Apr 23 00:54:46.749: INFO: Created: latency-svc-bcgg9 Apr 23 00:54:46.773: INFO: Got endpoints: latency-svc-bcgg9 [684.764237ms] Apr 23 00:54:46.784: INFO: Created: latency-svc-t6g26 Apr 23 00:54:46.794: INFO: Got endpoints: latency-svc-t6g26 [639.479679ms] Apr 23 00:54:46.809: INFO: Created: latency-svc-wcdhr Apr 23 00:54:46.818: INFO: Got endpoints: latency-svc-wcdhr [645.087324ms] Apr 23 00:54:46.837: INFO: Created: latency-svc-25zrr Apr 23 00:54:46.847: INFO: Got endpoints: latency-svc-25zrr [639.793735ms] Apr 23 00:54:46.860: INFO: Created: latency-svc-n7nz7 Apr 23 00:54:46.871: INFO: Got endpoints: latency-svc-n7nz7 [579.77028ms] Apr 23 00:54:46.916: INFO: Created: latency-svc-w4g5x Apr 23 00:54:46.925: INFO: Got endpoints: latency-svc-w4g5x [623.437853ms] Apr 23 00:54:46.946: INFO: Created: latency-svc-6zsq9 Apr 23 00:54:46.961: INFO: Got endpoints: latency-svc-6zsq9 [583.130029ms] Apr 23 00:54:46.988: INFO: Created: latency-svc-c2wg8 Apr 23 00:54:47.009: INFO: Got endpoints: latency-svc-c2wg8 [605.33607ms] Apr 23 00:54:47.059: INFO: Created: latency-svc-dthzg Apr 23 00:54:47.081: INFO: Got endpoints: latency-svc-dthzg [623.585781ms] Apr 23 00:54:47.101: INFO: Created: latency-svc-csnrm Apr 23 00:54:47.110: INFO: Got endpoints: latency-svc-csnrm [587.177246ms] Apr 23 00:54:47.132: INFO: Created: latency-svc-q8fdj Apr 23 00:54:47.162: INFO: Got endpoints: latency-svc-q8fdj [596.35963ms] Apr 23 00:54:47.193: INFO: Created: latency-svc-sq94k Apr 23 00:54:47.201: INFO: Got endpoints: latency-svc-sq94k [587.190466ms] Apr 23 00:54:47.222: INFO: Created: latency-svc-tq8kb Apr 23 00:54:47.231: INFO: Got endpoints: latency-svc-tq8kb [555.632381ms] 
Apr 23 00:54:47.256: INFO: Created: latency-svc-8z4fz Apr 23 00:54:47.306: INFO: Got endpoints: latency-svc-8z4fz [613.981303ms] Apr 23 00:54:47.329: INFO: Created: latency-svc-w56mj Apr 23 00:54:47.351: INFO: Got endpoints: latency-svc-w56mj [622.906703ms] Apr 23 00:54:47.379: INFO: Created: latency-svc-ng98l Apr 23 00:54:47.393: INFO: Got endpoints: latency-svc-ng98l [620.602131ms] Apr 23 00:54:47.449: INFO: Created: latency-svc-b8dmb Apr 23 00:54:47.472: INFO: Created: latency-svc-hgxcj Apr 23 00:54:47.472: INFO: Got endpoints: latency-svc-b8dmb [677.974575ms] Apr 23 00:54:47.488: INFO: Got endpoints: latency-svc-hgxcj [670.198407ms] Apr 23 00:54:47.522: INFO: Created: latency-svc-286pd Apr 23 00:54:47.530: INFO: Got endpoints: latency-svc-286pd [682.616736ms] Apr 23 00:54:47.575: INFO: Created: latency-svc-gtsnl Apr 23 00:54:47.593: INFO: Got endpoints: latency-svc-gtsnl [721.847305ms] Apr 23 00:54:47.649: INFO: Created: latency-svc-k9p2l Apr 23 00:54:47.662: INFO: Got endpoints: latency-svc-k9p2l [736.507034ms] Apr 23 00:54:47.700: INFO: Created: latency-svc-86565 Apr 23 00:54:47.715: INFO: Got endpoints: latency-svc-86565 [754.718893ms] Apr 23 00:54:47.737: INFO: Created: latency-svc-gcnxm Apr 23 00:54:47.745: INFO: Got endpoints: latency-svc-gcnxm [736.45342ms] Apr 23 00:54:47.773: INFO: Created: latency-svc-2mzpx Apr 23 00:54:47.857: INFO: Got endpoints: latency-svc-2mzpx [775.452261ms] Apr 23 00:54:47.885: INFO: Created: latency-svc-m7q8j Apr 23 00:54:47.896: INFO: Got endpoints: latency-svc-m7q8j [785.826799ms] Apr 23 00:54:47.919: INFO: Created: latency-svc-qcjgz Apr 23 00:54:47.932: INFO: Got endpoints: latency-svc-qcjgz [770.536969ms] Apr 23 00:54:47.953: INFO: Created: latency-svc-jtqq9 Apr 23 00:54:47.982: INFO: Got endpoints: latency-svc-jtqq9 [780.460625ms] Apr 23 00:54:48.025: INFO: Created: latency-svc-fdrgs Apr 23 00:54:48.040: INFO: Got endpoints: latency-svc-fdrgs [809.163691ms] Apr 23 00:54:48.068: INFO: Created: latency-svc-6xzjl Apr 23 
00:54:48.103: INFO: Got endpoints: latency-svc-6xzjl [796.668671ms] Apr 23 00:54:48.122: INFO: Created: latency-svc-bsjk6 Apr 23 00:54:48.136: INFO: Got endpoints: latency-svc-bsjk6 [785.045137ms] Apr 23 00:54:48.159: INFO: Created: latency-svc-gnmjp Apr 23 00:54:48.171: INFO: Got endpoints: latency-svc-gnmjp [777.150594ms] Apr 23 00:54:48.198: INFO: Created: latency-svc-rg7cg Apr 23 00:54:48.228: INFO: Got endpoints: latency-svc-rg7cg [755.373798ms] Apr 23 00:54:48.253: INFO: Created: latency-svc-n46h7 Apr 23 00:54:48.261: INFO: Got endpoints: latency-svc-n46h7 [772.692622ms] Apr 23 00:54:48.283: INFO: Created: latency-svc-9l6xj Apr 23 00:54:48.291: INFO: Got endpoints: latency-svc-9l6xj [761.181505ms] Apr 23 00:54:48.321: INFO: Created: latency-svc-qffw7 Apr 23 00:54:48.395: INFO: Got endpoints: latency-svc-qffw7 [802.772635ms] Apr 23 00:54:48.416: INFO: Created: latency-svc-4hl7c Apr 23 00:54:48.441: INFO: Got endpoints: latency-svc-4hl7c [778.914003ms] Apr 23 00:54:48.474: INFO: Created: latency-svc-s69gc Apr 23 00:54:48.488: INFO: Got endpoints: latency-svc-s69gc [772.936998ms] Apr 23 00:54:48.540: INFO: Created: latency-svc-ssf64 Apr 23 00:54:48.555: INFO: Got endpoints: latency-svc-ssf64 [809.963873ms] Apr 23 00:54:48.584: INFO: Created: latency-svc-jmmnh Apr 23 00:54:48.608: INFO: Got endpoints: latency-svc-jmmnh [751.439345ms] Apr 23 00:54:48.659: INFO: Created: latency-svc-n2f7l Apr 23 00:54:48.678: INFO: Created: latency-svc-466jm Apr 23 00:54:48.678: INFO: Got endpoints: latency-svc-n2f7l [782.122511ms] Apr 23 00:54:48.693: INFO: Got endpoints: latency-svc-466jm [761.09931ms] Apr 23 00:54:48.721: INFO: Created: latency-svc-h2dxp Apr 23 00:54:48.735: INFO: Got endpoints: latency-svc-h2dxp [753.035558ms] Apr 23 00:54:48.791: INFO: Created: latency-svc-cg85g Apr 23 00:54:48.806: INFO: Created: latency-svc-2lcss Apr 23 00:54:48.806: INFO: Got endpoints: latency-svc-cg85g [765.704458ms] Apr 23 00:54:48.819: INFO: Got endpoints: latency-svc-2lcss 
[716.36106ms] Apr 23 00:54:48.861: INFO: Created: latency-svc-k2hnf Apr 23 00:54:48.878: INFO: Got endpoints: latency-svc-k2hnf [742.00635ms] Apr 23 00:54:48.928: INFO: Created: latency-svc-xkqnr Apr 23 00:54:48.932: INFO: Got endpoints: latency-svc-xkqnr [760.996203ms] Apr 23 00:54:48.954: INFO: Created: latency-svc-kp4mh Apr 23 00:54:48.968: INFO: Got endpoints: latency-svc-kp4mh [740.617301ms] Apr 23 00:54:49.066: INFO: Created: latency-svc-tf8qk Apr 23 00:54:49.108: INFO: Got endpoints: latency-svc-tf8qk [846.546463ms] Apr 23 00:54:49.108: INFO: Created: latency-svc-mqs6d Apr 23 00:54:49.118: INFO: Got endpoints: latency-svc-mqs6d [826.638874ms] Apr 23 00:54:49.134: INFO: Created: latency-svc-8mbtl Apr 23 00:54:49.148: INFO: Got endpoints: latency-svc-8mbtl [752.139128ms] Apr 23 00:54:49.220: INFO: Created: latency-svc-jsk7c Apr 23 00:54:49.232: INFO: Got endpoints: latency-svc-jsk7c [791.371986ms] Apr 23 00:54:49.257: INFO: Created: latency-svc-tfp5p Apr 23 00:54:49.268: INFO: Got endpoints: latency-svc-tfp5p [779.717896ms] Apr 23 00:54:49.292: INFO: Created: latency-svc-p96pk Apr 23 00:54:49.335: INFO: Got endpoints: latency-svc-p96pk [779.728321ms] Apr 23 00:54:49.362: INFO: Created: latency-svc-ckhzs Apr 23 00:54:49.376: INFO: Got endpoints: latency-svc-ckhzs [767.733928ms] Apr 23 00:54:49.405: INFO: Created: latency-svc-w7blc Apr 23 00:54:49.434: INFO: Got endpoints: latency-svc-w7blc [98.740862ms] Apr 23 00:54:49.472: INFO: Created: latency-svc-r6cf8 Apr 23 00:54:49.484: INFO: Got endpoints: latency-svc-r6cf8 [805.942518ms] Apr 23 00:54:49.508: INFO: Created: latency-svc-p8lhc Apr 23 00:54:49.520: INFO: Got endpoints: latency-svc-p8lhc [826.668315ms] Apr 23 00:54:49.546: INFO: Created: latency-svc-ft9g2 Apr 23 00:54:49.593: INFO: Got endpoints: latency-svc-ft9g2 [858.198515ms] Apr 23 00:54:49.626: INFO: Created: latency-svc-6n8sd Apr 23 00:54:49.657: INFO: Got endpoints: latency-svc-6n8sd [851.156329ms] Apr 23 00:54:49.731: INFO: Created: 
latency-svc-q6btp Apr 23 00:54:49.754: INFO: Got endpoints: latency-svc-q6btp [935.146966ms] Apr 23 00:54:49.755: INFO: Created: latency-svc-9lbhx Apr 23 00:54:49.790: INFO: Got endpoints: latency-svc-9lbhx [912.082335ms] Apr 23 00:54:49.875: INFO: Created: latency-svc-p9x9k Apr 23 00:54:49.890: INFO: Got endpoints: latency-svc-p9x9k [958.642813ms] Apr 23 00:54:49.910: INFO: Created: latency-svc-9gffw Apr 23 00:54:49.926: INFO: Got endpoints: latency-svc-9gffw [958.154678ms] Apr 23 00:54:49.952: INFO: Created: latency-svc-5p867 Apr 23 00:54:49.968: INFO: Got endpoints: latency-svc-5p867 [860.582671ms] Apr 23 00:54:50.016: INFO: Created: latency-svc-m6bmb Apr 23 00:54:50.030: INFO: Got endpoints: latency-svc-m6bmb [912.139273ms] Apr 23 00:54:50.046: INFO: Created: latency-svc-96lkb Apr 23 00:54:50.059: INFO: Got endpoints: latency-svc-96lkb [911.645826ms] Apr 23 00:54:50.076: INFO: Created: latency-svc-wjpcd Apr 23 00:54:50.089: INFO: Got endpoints: latency-svc-wjpcd [857.276821ms] Apr 23 00:54:50.106: INFO: Created: latency-svc-lk8kf Apr 23 00:54:50.132: INFO: Got endpoints: latency-svc-lk8kf [863.716691ms] Apr 23 00:54:50.143: INFO: Created: latency-svc-qf7hx Apr 23 00:54:50.161: INFO: Got endpoints: latency-svc-qf7hx [785.369646ms] Apr 23 00:54:50.180: INFO: Created: latency-svc-mplq2 Apr 23 00:54:50.191: INFO: Got endpoints: latency-svc-mplq2 [757.144241ms] Apr 23 00:54:50.210: INFO: Created: latency-svc-ws9nx Apr 23 00:54:50.226: INFO: Got endpoints: latency-svc-ws9nx [741.754277ms] Apr 23 00:54:50.264: INFO: Created: latency-svc-jck78 Apr 23 00:54:50.268: INFO: Got endpoints: latency-svc-jck78 [747.425891ms] Apr 23 00:54:50.286: INFO: Created: latency-svc-vbpnw Apr 23 00:54:50.298: INFO: Got endpoints: latency-svc-vbpnw [704.44445ms] Apr 23 00:54:50.316: INFO: Created: latency-svc-zqd65 Apr 23 00:54:50.342: INFO: Got endpoints: latency-svc-zqd65 [684.691232ms] Apr 23 00:54:50.389: INFO: Created: latency-svc-l6rmv Apr 23 00:54:50.394: INFO: Got endpoints: 
latency-svc-l6rmv [639.600574ms] Apr 23 00:54:50.454: INFO: Created: latency-svc-xkhvv Apr 23 00:54:50.478: INFO: Got endpoints: latency-svc-xkhvv [687.308395ms] Apr 23 00:54:50.521: INFO: Created: latency-svc-zpv8g Apr 23 00:54:50.531: INFO: Got endpoints: latency-svc-zpv8g [641.19395ms] Apr 23 00:54:50.558: INFO: Created: latency-svc-n5rns Apr 23 00:54:50.575: INFO: Got endpoints: latency-svc-n5rns [648.407267ms] Apr 23 00:54:50.587: INFO: Created: latency-svc-9n22m Apr 23 00:54:50.598: INFO: Got endpoints: latency-svc-9n22m [630.08347ms] Apr 23 00:54:50.612: INFO: Created: latency-svc-ksmhd Apr 23 00:54:50.635: INFO: Got endpoints: latency-svc-ksmhd [604.743511ms] Apr 23 00:54:50.658: INFO: Created: latency-svc-vsmkg Apr 23 00:54:50.683: INFO: Got endpoints: latency-svc-vsmkg [623.245519ms] Apr 23 00:54:50.718: INFO: Created: latency-svc-pm594 Apr 23 00:54:50.730: INFO: Got endpoints: latency-svc-pm594 [640.887762ms] Apr 23 00:54:50.766: INFO: Created: latency-svc-tjlvc Apr 23 00:54:50.786: INFO: Got endpoints: latency-svc-tjlvc [653.593618ms] Apr 23 00:54:50.786: INFO: Created: latency-svc-wmrfm Apr 23 00:54:50.810: INFO: Got endpoints: latency-svc-wmrfm [648.347869ms] Apr 23 00:54:50.841: INFO: Created: latency-svc-7h9sq Apr 23 00:54:50.855: INFO: Got endpoints: latency-svc-7h9sq [663.773314ms] Apr 23 00:54:50.910: INFO: Created: latency-svc-dg5xq Apr 23 00:54:50.915: INFO: Got endpoints: latency-svc-dg5xq [688.553741ms] Apr 23 00:54:50.927: INFO: Created: latency-svc-gx76p Apr 23 00:54:50.939: INFO: Got endpoints: latency-svc-gx76p [671.517246ms] Apr 23 00:54:50.952: INFO: Created: latency-svc-xrdhp Apr 23 00:54:50.963: INFO: Got endpoints: latency-svc-xrdhp [665.033605ms] Apr 23 00:54:50.984: INFO: Created: latency-svc-h2cxm Apr 23 00:54:51.084: INFO: Got endpoints: latency-svc-h2cxm [742.227989ms] Apr 23 00:54:51.089: INFO: Created: latency-svc-5d6f2 Apr 23 00:54:51.101: INFO: Got endpoints: latency-svc-5d6f2 [706.698367ms] Apr 23 00:54:51.144: INFO: 
Created: latency-svc-xqs75 Apr 23 00:54:51.167: INFO: Got endpoints: latency-svc-xqs75 [688.966327ms] Apr 23 00:54:51.234: INFO: Created: latency-svc-jw45g Apr 23 00:54:51.254: INFO: Got endpoints: latency-svc-jw45g [722.282472ms] Apr 23 00:54:51.255: INFO: Created: latency-svc-lxslp Apr 23 00:54:51.263: INFO: Got endpoints: latency-svc-lxslp [688.357001ms] Apr 23 00:54:51.278: INFO: Created: latency-svc-m7szs Apr 23 00:54:51.287: INFO: Got endpoints: latency-svc-m7szs [688.962061ms] Apr 23 00:54:51.306: INFO: Created: latency-svc-qhlmk Apr 23 00:54:51.330: INFO: Got endpoints: latency-svc-qhlmk [695.459131ms] Apr 23 00:54:51.372: INFO: Created: latency-svc-9hkmb Apr 23 00:54:51.377: INFO: Got endpoints: latency-svc-9hkmb [694.71519ms] Apr 23 00:54:51.399: INFO: Created: latency-svc-gzlqs Apr 23 00:54:51.432: INFO: Got endpoints: latency-svc-gzlqs [701.167356ms] Apr 23 00:54:51.452: INFO: Created: latency-svc-dwsfz Apr 23 00:54:51.515: INFO: Got endpoints: latency-svc-dwsfz [729.408508ms] Apr 23 00:54:51.528: INFO: Created: latency-svc-nrf9l Apr 23 00:54:51.544: INFO: Got endpoints: latency-svc-nrf9l [734.006687ms] Apr 23 00:54:51.560: INFO: Created: latency-svc-rv976 Apr 23 00:54:51.568: INFO: Got endpoints: latency-svc-rv976 [712.691747ms] Apr 23 00:54:51.596: INFO: Created: latency-svc-lpk8m Apr 23 00:54:51.665: INFO: Got endpoints: latency-svc-lpk8m [750.080629ms] Apr 23 00:54:51.666: INFO: Created: latency-svc-ljxtk Apr 23 00:54:51.670: INFO: Got endpoints: latency-svc-ljxtk [730.316491ms] Apr 23 00:54:51.696: INFO: Created: latency-svc-psxpc Apr 23 00:54:51.712: INFO: Got endpoints: latency-svc-psxpc [749.068825ms] Apr 23 00:54:51.816: INFO: Created: latency-svc-x4k4k Apr 23 00:54:51.848: INFO: Got endpoints: latency-svc-x4k4k [763.468845ms] Apr 23 00:54:51.884: INFO: Created: latency-svc-wlvlv Apr 23 00:54:51.898: INFO: Got endpoints: latency-svc-wlvlv [796.933528ms] Apr 23 00:54:51.958: INFO: Created: latency-svc-vcgbn Apr 23 00:54:51.978: INFO: Created: 
latency-svc-96m9n Apr 23 00:54:51.978: INFO: Got endpoints: latency-svc-vcgbn [811.584786ms] Apr 23 00:54:52.008: INFO: Got endpoints: latency-svc-96m9n [753.871939ms] Apr 23 00:54:52.044: INFO: Created: latency-svc-wbh2n Apr 23 00:54:52.072: INFO: Got endpoints: latency-svc-wbh2n [808.54408ms] Apr 23 00:54:52.088: INFO: Created: latency-svc-l5gj4 Apr 23 00:54:52.097: INFO: Got endpoints: latency-svc-l5gj4 [809.232835ms] Apr 23 00:54:52.112: INFO: Created: latency-svc-8wqn5 Apr 23 00:54:52.126: INFO: Got endpoints: latency-svc-8wqn5 [796.271079ms] Apr 23 00:54:52.143: INFO: Created: latency-svc-jzqv7 Apr 23 00:54:52.151: INFO: Got endpoints: latency-svc-jzqv7 [773.178406ms] Apr 23 00:54:52.170: INFO: Created: latency-svc-5lvx7 Apr 23 00:54:52.228: INFO: Got endpoints: latency-svc-5lvx7 [796.194881ms] Apr 23 00:54:52.230: INFO: Created: latency-svc-mq757 Apr 23 00:54:52.242: INFO: Got endpoints: latency-svc-mq757 [726.417438ms] Apr 23 00:54:52.262: INFO: Created: latency-svc-ncsbs Apr 23 00:54:52.275: INFO: Got endpoints: latency-svc-ncsbs [730.87092ms] Apr 23 00:54:52.291: INFO: Created: latency-svc-6sqv8 Apr 23 00:54:52.305: INFO: Got endpoints: latency-svc-6sqv8 [737.230171ms] Apr 23 00:54:52.322: INFO: Created: latency-svc-sbtff Apr 23 00:54:52.362: INFO: Got endpoints: latency-svc-sbtff [697.493217ms] Apr 23 00:54:52.364: INFO: Created: latency-svc-krn8z Apr 23 00:54:52.377: INFO: Got endpoints: latency-svc-krn8z [706.973415ms] Apr 23 00:54:52.398: INFO: Created: latency-svc-jd2vh Apr 23 00:54:52.407: INFO: Got endpoints: latency-svc-jd2vh [694.78432ms] Apr 23 00:54:52.422: INFO: Created: latency-svc-ldccl Apr 23 00:54:52.444: INFO: Got endpoints: latency-svc-ldccl [595.744972ms] Apr 23 00:54:52.521: INFO: Created: latency-svc-9hkq9 Apr 23 00:54:52.542: INFO: Created: latency-svc-922rm Apr 23 00:54:52.542: INFO: Got endpoints: latency-svc-9hkq9 [644.196222ms] Apr 23 00:54:52.558: INFO: Got endpoints: latency-svc-922rm [579.513938ms] Apr 23 00:54:52.584: INFO: 
Created: latency-svc-bnn2x Apr 23 00:54:52.600: INFO: Got endpoints: latency-svc-bnn2x [591.973784ms] Apr 23 00:54:52.620: INFO: Created: latency-svc-n2mlv Apr 23 00:54:52.647: INFO: Got endpoints: latency-svc-n2mlv [574.975003ms] Apr 23 00:54:52.664: INFO: Created: latency-svc-zrhz4 Apr 23 00:54:52.678: INFO: Got endpoints: latency-svc-zrhz4 [580.859447ms] Apr 23 00:54:52.693: INFO: Created: latency-svc-sbs6z Apr 23 00:54:52.708: INFO: Got endpoints: latency-svc-sbs6z [581.196075ms] Apr 23 00:54:52.724: INFO: Created: latency-svc-sk8gx Apr 23 00:54:52.745: INFO: Got endpoints: latency-svc-sk8gx [594.890282ms] Apr 23 00:54:52.791: INFO: Created: latency-svc-8fkx2 Apr 23 00:54:52.797: INFO: Got endpoints: latency-svc-8fkx2 [569.029432ms] Apr 23 00:54:52.826: INFO: Created: latency-svc-mt97l Apr 23 00:54:52.838: INFO: Got endpoints: latency-svc-mt97l [596.715675ms] Apr 23 00:54:52.868: INFO: Created: latency-svc-mzp6p Apr 23 00:54:52.916: INFO: Got endpoints: latency-svc-mzp6p [641.56467ms] Apr 23 00:54:52.926: INFO: Created: latency-svc-fd2xz Apr 23 00:54:52.940: INFO: Got endpoints: latency-svc-fd2xz [635.307468ms] Apr 23 00:54:52.956: INFO: Created: latency-svc-85b7g Apr 23 00:54:52.964: INFO: Got endpoints: latency-svc-85b7g [601.585217ms] Apr 23 00:54:52.979: INFO: Created: latency-svc-c2s98 Apr 23 00:54:52.988: INFO: Got endpoints: latency-svc-c2s98 [611.333599ms] Apr 23 00:54:53.010: INFO: Created: latency-svc-2bkn7 Apr 23 00:54:53.036: INFO: Got endpoints: latency-svc-2bkn7 [628.976934ms] Apr 23 00:54:53.054: INFO: Created: latency-svc-4h64t Apr 23 00:54:53.067: INFO: Got endpoints: latency-svc-4h64t [623.145193ms] Apr 23 00:54:53.114: INFO: Created: latency-svc-sdz2t Apr 23 00:54:53.156: INFO: Got endpoints: latency-svc-sdz2t [613.715512ms] Apr 23 00:54:53.184: INFO: Created: latency-svc-zmnm5 Apr 23 00:54:53.199: INFO: Got endpoints: latency-svc-zmnm5 [641.199577ms] Apr 23 00:54:53.220: INFO: Created: latency-svc-6g5j6 Apr 23 00:54:53.235: INFO: Got 
endpoints: latency-svc-6g5j6 [635.407105ms] Apr 23 00:54:53.251: INFO: Created: latency-svc-hxdmd Apr 23 00:54:53.306: INFO: Got endpoints: latency-svc-hxdmd [659.011565ms] Apr 23 00:54:53.318: INFO: Created: latency-svc-4v2xz Apr 23 00:54:53.331: INFO: Got endpoints: latency-svc-4v2xz [653.203073ms] Apr 23 00:54:53.346: INFO: Created: latency-svc-mnw59 Apr 23 00:54:53.360: INFO: Got endpoints: latency-svc-mnw59 [652.29328ms] Apr 23 00:54:53.382: INFO: Created: latency-svc-l8skw Apr 23 00:54:53.456: INFO: Got endpoints: latency-svc-l8skw [710.219411ms] Apr 23 00:54:53.458: INFO: Created: latency-svc-srz57 Apr 23 00:54:53.467: INFO: Got endpoints: latency-svc-srz57 [670.396148ms] Apr 23 00:54:53.486: INFO: Created: latency-svc-z25gh Apr 23 00:54:53.502: INFO: Got endpoints: latency-svc-z25gh [663.306653ms] Apr 23 00:54:53.516: INFO: Created: latency-svc-jmt45 Apr 23 00:54:53.540: INFO: Got endpoints: latency-svc-jmt45 [623.739439ms] Apr 23 00:54:53.587: INFO: Created: latency-svc-7b2gf Apr 23 00:54:53.610: INFO: Got endpoints: latency-svc-7b2gf [669.327534ms] Apr 23 00:54:53.612: INFO: Created: latency-svc-xxnd4 Apr 23 00:54:53.630: INFO: Got endpoints: latency-svc-xxnd4 [665.589582ms] Apr 23 00:54:53.658: INFO: Created: latency-svc-c5dn8 Apr 23 00:54:53.672: INFO: Got endpoints: latency-svc-c5dn8 [684.108718ms] Apr 23 00:54:53.755: INFO: Created: latency-svc-kzkk5 Apr 23 00:54:53.774: INFO: Got endpoints: latency-svc-kzkk5 [737.756859ms] Apr 23 00:54:53.774: INFO: Created: latency-svc-xktmc Apr 23 00:54:53.786: INFO: Got endpoints: latency-svc-xktmc [719.291957ms] Apr 23 00:54:53.814: INFO: Created: latency-svc-2xvhb Apr 23 00:54:53.844: INFO: Got endpoints: latency-svc-2xvhb [688.095927ms] Apr 23 00:54:53.892: INFO: Created: latency-svc-hpfzr Apr 23 00:54:53.901: INFO: Got endpoints: latency-svc-hpfzr [701.576608ms] Apr 23 00:54:53.918: INFO: Created: latency-svc-jhrvw Apr 23 00:54:53.930: INFO: Got endpoints: latency-svc-jhrvw [694.971635ms] Apr 23 00:54:53.950: 
INFO: Created: latency-svc-pktk6 Apr 23 00:54:53.972: INFO: Got endpoints: latency-svc-pktk6 [665.931506ms] Apr 23 00:54:54.018: INFO: Created: latency-svc-nw5jm Apr 23 00:54:54.035: INFO: Got endpoints: latency-svc-nw5jm [704.523035ms] Apr 23 00:54:54.036: INFO: Created: latency-svc-n295q Apr 23 00:54:54.050: INFO: Got endpoints: latency-svc-n295q [689.572264ms] Apr 23 00:54:54.066: INFO: Created: latency-svc-85hsx Apr 23 00:54:54.092: INFO: Got endpoints: latency-svc-85hsx [636.217607ms] Apr 23 00:54:54.144: INFO: Created: latency-svc-vvxtg Apr 23 00:54:54.164: INFO: Got endpoints: latency-svc-vvxtg [696.559514ms] Apr 23 00:54:54.164: INFO: Created: latency-svc-frb77 Apr 23 00:54:54.181: INFO: Got endpoints: latency-svc-frb77 [679.173766ms] Apr 23 00:54:54.199: INFO: Created: latency-svc-4xmsk Apr 23 00:54:54.228: INFO: Got endpoints: latency-svc-4xmsk [687.27157ms] Apr 23 00:54:54.270: INFO: Created: latency-svc-h7rnv Apr 23 00:54:54.277: INFO: Got endpoints: latency-svc-h7rnv [666.782367ms] Apr 23 00:54:54.295: INFO: Created: latency-svc-scmcd Apr 23 00:54:54.307: INFO: Got endpoints: latency-svc-scmcd [677.590014ms] Apr 23 00:54:54.326: INFO: Created: latency-svc-mrlch Apr 23 00:54:54.350: INFO: Got endpoints: latency-svc-mrlch [677.749663ms] Apr 23 00:54:54.402: INFO: Created: latency-svc-8tr9s Apr 23 00:54:54.420: INFO: Got endpoints: latency-svc-8tr9s [646.138491ms] Apr 23 00:54:54.420: INFO: Created: latency-svc-ggn9g Apr 23 00:54:54.433: INFO: Got endpoints: latency-svc-ggn9g [647.231495ms] Apr 23 00:54:54.451: INFO: Created: latency-svc-2hx8t Apr 23 00:54:54.475: INFO: Got endpoints: latency-svc-2hx8t [630.879647ms] Apr 23 00:54:54.498: INFO: Created: latency-svc-xv48f Apr 23 00:54:54.551: INFO: Got endpoints: latency-svc-xv48f [650.285438ms] Apr 23 00:54:54.553: INFO: Created: latency-svc-sw45x Apr 23 00:54:54.559: INFO: Got endpoints: latency-svc-sw45x [628.592689ms] Apr 23 00:54:54.583: INFO: Created: latency-svc-8w8mb Apr 23 00:54:54.600: INFO: Got 
endpoints: latency-svc-8w8mb [627.948092ms] Apr 23 00:54:54.618: INFO: Created: latency-svc-nflbd Apr 23 00:54:54.630: INFO: Got endpoints: latency-svc-nflbd [594.790352ms] Apr 23 00:54:54.648: INFO: Created: latency-svc-df296 Apr 23 00:54:54.689: INFO: Got endpoints: latency-svc-df296 [639.104458ms] Apr 23 00:54:54.704: INFO: Created: latency-svc-vb2mx Apr 23 00:54:54.720: INFO: Got endpoints: latency-svc-vb2mx [627.618188ms] Apr 23 00:54:54.740: INFO: Created: latency-svc-jm9kc Apr 23 00:54:54.756: INFO: Got endpoints: latency-svc-jm9kc [591.88812ms] Apr 23 00:54:54.777: INFO: Created: latency-svc-jqtsd Apr 23 00:54:54.827: INFO: Got endpoints: latency-svc-jqtsd [645.827465ms] Apr 23 00:54:54.828: INFO: Created: latency-svc-hz8wq Apr 23 00:54:54.833: INFO: Got endpoints: latency-svc-hz8wq [605.802584ms] Apr 23 00:54:54.851: INFO: Created: latency-svc-lhxmw Apr 23 00:54:54.864: INFO: Got endpoints: latency-svc-lhxmw [587.692848ms] Apr 23 00:54:54.882: INFO: Created: latency-svc-4vstj Apr 23 00:54:54.895: INFO: Got endpoints: latency-svc-4vstj [587.142342ms] Apr 23 00:54:54.912: INFO: Created: latency-svc-rw5bv Apr 23 00:54:54.924: INFO: Got endpoints: latency-svc-rw5bv [574.29337ms] Apr 23 00:54:54.964: INFO: Created: latency-svc-8ldh7 Apr 23 00:54:54.980: INFO: Got endpoints: latency-svc-8ldh7 [559.808424ms] Apr 23 00:54:54.998: INFO: Created: latency-svc-qp8rc Apr 23 00:54:55.008: INFO: Got endpoints: latency-svc-qp8rc [574.691584ms] Apr 23 00:54:55.026: INFO: Created: latency-svc-q9kml Apr 23 00:54:55.044: INFO: Got endpoints: latency-svc-q9kml [569.713541ms] Apr 23 00:54:55.062: INFO: Created: latency-svc-5kh2m Apr 23 00:54:55.126: INFO: Got endpoints: latency-svc-5kh2m [574.601204ms] Apr 23 00:54:55.126: INFO: Latencies: [48.858601ms 79.357649ms 98.740862ms 167.309865ms 212.030257ms 254.204893ms 321.150246ms 339.143833ms 373.453527ms 457.150098ms 467.995353ms 543.910431ms 555.632381ms 559.808424ms 569.029432ms 569.713541ms 570.06854ms 574.29337ms 574.601204ms 
574.691584ms 574.975003ms 579.513938ms 579.77028ms 580.859447ms 581.196075ms 583.130029ms 587.142342ms 587.177246ms 587.190466ms 587.692848ms 591.88812ms 591.973784ms 594.790352ms 594.890282ms 595.744972ms 596.35963ms 596.715675ms 601.585217ms 604.743511ms 605.33607ms 605.802584ms 611.333599ms 613.715512ms 613.981303ms 620.602131ms 622.906703ms 623.145193ms 623.245519ms 623.437853ms 623.585781ms 623.722749ms 623.739439ms 627.618188ms 627.948092ms 628.592689ms 628.976934ms 630.08347ms 630.879647ms 635.307468ms 635.407105ms 636.217607ms 639.104458ms 639.479679ms 639.600574ms 639.793735ms 640.887762ms 641.19395ms 641.199577ms 641.56467ms 644.196222ms 645.087324ms 645.827465ms 646.138491ms 647.231495ms 648.347869ms 648.407267ms 650.285438ms 652.29328ms 653.203073ms 653.593618ms 659.011565ms 663.306653ms 663.773314ms 665.033605ms 665.589582ms 665.931506ms 666.782367ms 669.327534ms 670.198407ms 670.396148ms 671.517246ms 677.590014ms 677.749663ms 677.974575ms 679.173766ms 682.616736ms 682.629729ms 684.108718ms 684.691232ms 684.764237ms 687.27157ms 687.308395ms 688.095927ms 688.357001ms 688.553741ms 688.962061ms 688.966327ms 689.488961ms 689.572264ms 690.909195ms 694.71519ms 694.78432ms 694.971635ms 695.459131ms 696.559514ms 697.493217ms 701.167356ms 701.576608ms 704.44445ms 704.523035ms 706.698367ms 706.973415ms 710.219411ms 712.691747ms 716.36106ms 719.291957ms 721.847305ms 722.282472ms 726.417438ms 729.408508ms 730.316491ms 730.87092ms 731.759376ms 731.841512ms 734.006687ms 736.45342ms 736.507034ms 737.230171ms 737.756859ms 740.617301ms 741.754277ms 742.00635ms 742.227989ms 747.425891ms 749.068825ms 750.080629ms 751.439345ms 752.139128ms 753.035558ms 753.871939ms 754.718893ms 755.373798ms 757.144241ms 760.996203ms 761.09931ms 761.181505ms 762.516777ms 763.468845ms 765.704458ms 767.733928ms 770.536969ms 772.692622ms 772.936998ms 773.178406ms 775.452261ms 777.150594ms 778.914003ms 779.717896ms 779.728321ms 780.460625ms 782.122511ms 785.045137ms 785.369646ms 785.826799ms 
791.371986ms 796.194881ms 796.271079ms 796.668671ms 796.933528ms 802.772635ms 805.942518ms 808.54408ms 809.163691ms 809.232835ms 809.963873ms 811.584786ms 826.638874ms 826.668315ms 846.546463ms 851.156329ms 857.276821ms 858.198515ms 860.582671ms 863.716691ms 911.645826ms 912.082335ms 912.139273ms 935.146966ms 958.154678ms 958.642813ms] Apr 23 00:54:55.126: INFO: 50 %ile: 687.27157ms Apr 23 00:54:55.126: INFO: 90 %ile: 805.942518ms Apr 23 00:54:55.126: INFO: 99 %ile: 958.154678ms Apr 23 00:54:55.126: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:54:55.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3343" for this suite. • [SLOW TEST:12.726 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":254,"skipped":4265,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:54:55.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:55:01.410: INFO: DNS probes using dns-7554/dns-test-d728559d-eef1-4838-9c09-2bd445f1add7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:55:01.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7554" for this suite. • [SLOW TEST:6.424 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":255,"skipped":4272,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:55:01.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:55:02.093: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d" in namespace "security-context-test-8325" to be "Succeeded or Failed" Apr 23 00:55:02.122: INFO: Pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.271224ms Apr 23 00:55:04.162: INFO: Pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069162267s Apr 23 00:55:06.210: INFO: Pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116680158s Apr 23 00:55:08.213: INFO: Pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119718705s Apr 23 00:55:08.213: INFO: Pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d" satisfied condition "Succeeded or Failed" Apr 23 00:55:08.237: INFO: Got logs for pod "busybox-privileged-false-850dc551-3f26-4ed9-95eb-e489b6d1bc7d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:55:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8325" for this suite. 
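The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" messages above come from a poll loop: the framework repeatedly fetches the pod, logs its phase and elapsed time, and stops when a terminal phase is reached. A minimal Python sketch of that pattern follows; `get_phase` is a hypothetical stand-in for the real API call (the e2e framework itself uses the Go client), and the log format is only approximated.

```python
import time

def wait_for_pod_condition(get_phase, pod_name, timeout_s=300, poll_s=2.0):
    """Poll a pod's phase until it is "Succeeded" or "Failed".

    get_phase is a hypothetical callable standing in for an API GET of
    the pod; the real framework calls the Kubernetes Go client instead.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        phase = get_phase(pod_name)
        # Mirrors the log lines: Pod "...": Phase="Pending", ... Elapsed: ...
        print(f'Pod "{pod_name}": Phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f'pod "{pod_name}" still {phase} after {timeout_s}s')
        time.sleep(poll_s)

# Usage with a stubbed phase sequence (no cluster required):
phases = iter(["Pending", "Pending", "Succeeded"])
final = wait_for_pod_condition(lambda name: next(phases), "demo-pod", poll_s=0.0)
```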
• [SLOW TEST:6.724 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:55:08.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:55:12.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7527" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4341,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:55:12.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:55:13.574: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:55:15.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200113, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200113, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200113, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200113, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:55:18.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 23 00:55:19.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 23 00:55:20.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 23 00:55:21.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 23 00:55:22.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 
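The discovery steps above amount to walking the `/apis` document (an `APIGroupList`), locating the `admissionregistration.k8s.io` group, and confirming `v1` is among its versions. A sketch of that lookup over the parsed JSON follows; the sample document below is illustrative, not captured from this run.

```python
def find_group_version(apis_doc, group, version):
    """Return the groupVersion string if a /apis discovery document
    (parsed JSON, APIGroupList-shaped) lists the given group/version,
    else None."""
    for g in apis_doc.get("groups", []):
        if g.get("name") != group:
            continue
        for v in g.get("versions", []):
            if v.get("version") == version:
                return v.get("groupVersion")
    return None

# Illustrative document shaped like an APIGroupList response:
doc = {
    "groups": [
        {
            "name": "admissionregistration.k8s.io",
            "versions": [
                {"groupVersion": "admissionregistration.k8s.io/v1", "version": "v1"}
            ],
        }
    ]
}
```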
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:55:22.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9580" for this suite. STEP: Destroying namespace "webhook-9580-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.178 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":258,"skipped":4342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:55:22.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
emptydir 0777 on tmpfs Apr 23 00:55:22.854: INFO: Waiting up to 5m0s for pod "pod-284fbd1c-086f-4318-a7cb-084b69fb8723" in namespace "emptydir-3978" to be "Succeeded or Failed" Apr 23 00:55:22.858: INFO: Pod "pod-284fbd1c-086f-4318-a7cb-084b69fb8723": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169822ms Apr 23 00:55:24.862: INFO: Pod "pod-284fbd1c-086f-4318-a7cb-084b69fb8723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008197346s Apr 23 00:55:26.867: INFO: Pod "pod-284fbd1c-086f-4318-a7cb-084b69fb8723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012417517s STEP: Saw pod success Apr 23 00:55:26.867: INFO: Pod "pod-284fbd1c-086f-4318-a7cb-084b69fb8723" satisfied condition "Succeeded or Failed" Apr 23 00:55:26.870: INFO: Trying to get logs from node latest-worker2 pod pod-284fbd1c-086f-4318-a7cb-084b69fb8723 container test-container: STEP: delete the pod Apr 23 00:55:26.890: INFO: Waiting for pod pod-284fbd1c-086f-4318-a7cb-084b69fb8723 to disappear Apr 23 00:55:26.911: INFO: Pod pod-284fbd1c-086f-4318-a7cb-084b69fb8723 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:55:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3978" for this suite. 
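The repeated "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines with growing Elapsed values come from a poll-until-terminal-phase loop. A minimal sketch of that pattern, with a hypothetical `get_phase` callable standing in for the pod GET the framework actually performs:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mirroring the e2e framework's 5m0s pod wait. Raises on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()  # in the real test: read pod.Status.Phase
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_condition(lambda: next(phases), timeout=5.0, interval=0.01))
```

The framework additionally logs elapsed time on each poll, which is what produces the per-attempt INFO lines above.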
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4365,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:55:26.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-02db4830-7e1c-4fa8-b602-4730f10c6a1c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-02db4830-7e1c-4fa8-b602-4730f10c6a1c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:56:39.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9436" for this suite. 
• [SLOW TEST:72.438 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4381,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:56:39.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6562 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6562 STEP: creating replication controller externalsvc in namespace services-6562 I0423 00:56:39.506626 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6562, replica count: 2 I0423 00:56:42.557102 7 runners.go:190] externalsvc 
Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 00:56:45.557515 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 23 00:56:45.618: INFO: Creating new exec pod Apr 23 00:56:49.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6562 execpodcdqhh -- /bin/sh -x -c nslookup clusterip-service' Apr 23 00:56:49.878: INFO: stderr: "I0423 00:56:49.766579 3884 log.go:172] (0xc000b37d90) (0xc000aead20) Create stream\nI0423 00:56:49.766646 3884 log.go:172] (0xc000b37d90) (0xc000aead20) Stream added, broadcasting: 1\nI0423 00:56:49.771298 3884 log.go:172] (0xc000b37d90) Reply frame received for 1\nI0423 00:56:49.771337 3884 log.go:172] (0xc000b37d90) (0xc0005f7720) Create stream\nI0423 00:56:49.771346 3884 log.go:172] (0xc000b37d90) (0xc0005f7720) Stream added, broadcasting: 3\nI0423 00:56:49.772431 3884 log.go:172] (0xc000b37d90) Reply frame received for 3\nI0423 00:56:49.772485 3884 log.go:172] (0xc000b37d90) (0xc000404b40) Create stream\nI0423 00:56:49.772500 3884 log.go:172] (0xc000b37d90) (0xc000404b40) Stream added, broadcasting: 5\nI0423 00:56:49.773583 3884 log.go:172] (0xc000b37d90) Reply frame received for 5\nI0423 00:56:49.859054 3884 log.go:172] (0xc000b37d90) Data frame received for 5\nI0423 00:56:49.859075 3884 log.go:172] (0xc000404b40) (5) Data frame handling\nI0423 00:56:49.859088 3884 log.go:172] (0xc000404b40) (5) Data frame sent\n+ nslookup clusterip-service\nI0423 00:56:49.869358 3884 log.go:172] (0xc000b37d90) Data frame received for 3\nI0423 00:56:49.869387 3884 log.go:172] (0xc0005f7720) (3) Data frame handling\nI0423 00:56:49.869405 3884 log.go:172] (0xc0005f7720) (3) Data frame sent\nI0423 00:56:49.870240 3884 log.go:172] (0xc000b37d90) 
Data frame received for 3\nI0423 00:56:49.870256 3884 log.go:172] (0xc0005f7720) (3) Data frame handling\nI0423 00:56:49.870269 3884 log.go:172] (0xc0005f7720) (3) Data frame sent\nI0423 00:56:49.870677 3884 log.go:172] (0xc000b37d90) Data frame received for 5\nI0423 00:56:49.870703 3884 log.go:172] (0xc000404b40) (5) Data frame handling\nI0423 00:56:49.870853 3884 log.go:172] (0xc000b37d90) Data frame received for 3\nI0423 00:56:49.870872 3884 log.go:172] (0xc0005f7720) (3) Data frame handling\nI0423 00:56:49.872032 3884 log.go:172] (0xc000b37d90) Data frame received for 1\nI0423 00:56:49.872097 3884 log.go:172] (0xc000aead20) (1) Data frame handling\nI0423 00:56:49.872130 3884 log.go:172] (0xc000aead20) (1) Data frame sent\nI0423 00:56:49.872164 3884 log.go:172] (0xc000b37d90) (0xc000aead20) Stream removed, broadcasting: 1\nI0423 00:56:49.872180 3884 log.go:172] (0xc000b37d90) Go away received\nI0423 00:56:49.872595 3884 log.go:172] (0xc000b37d90) (0xc000aead20) Stream removed, broadcasting: 1\nI0423 00:56:49.872618 3884 log.go:172] (0xc000b37d90) (0xc0005f7720) Stream removed, broadcasting: 3\nI0423 00:56:49.872631 3884 log.go:172] (0xc000b37d90) (0xc000404b40) Stream removed, broadcasting: 5\n" Apr 23 00:56:49.878: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6562.svc.cluster.local\tcanonical name = externalsvc.services-6562.svc.cluster.local.\nName:\texternalsvc.services-6562.svc.cluster.local\nAddress: 10.96.15.202\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6562, will wait for the garbage collector to delete the pods Apr 23 00:56:49.938: INFO: Deleting ReplicationController externalsvc took: 6.765165ms Apr 23 00:56:50.238: INFO: Terminating ReplicationController externalsvc pods took: 300.212725ms Apr 23 00:57:03.091: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:57:03.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6562" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.774 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":261,"skipped":4395,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:57:03.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:57:03.912: INFO: deployment "sample-webhook-deployment" doesn't 
have the required revision set Apr 23 00:57:05.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200223, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200223, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200223, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200223, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:57:09.023: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:57:09.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5120-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:57:10.240: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "webhook-5902" for this suite. STEP: Destroying namespace "webhook-5902-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.234 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":262,"skipped":4397,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:57:10.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:57:10.409: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 23 00:57:10.429: INFO: Pod name sample-pod: Found 0 pods 
out of 1 Apr 23 00:57:15.436: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 23 00:57:15.436: INFO: Creating deployment "test-rolling-update-deployment" Apr 23 00:57:15.454: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 23 00:57:15.478: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 23 00:57:17.483: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 23 00:57:17.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200235, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200235, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200235, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200235, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 00:57:19.489: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 23 00:57:19.499: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6762 
/apis/apps/v1/namespaces/deployment-6762/deployments/test-rolling-update-deployment 7667cba3-a378-4fb9-99a2-9f45f02a3b72 10273258 1 2020-04-23 00:57:15 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034b8ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-23 00:57:15 +0000 UTC,LastTransitionTime:2020-04-23 00:57:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-23 00:57:18 +0000 
UTC,LastTransitionTime:2020-04-23 00:57:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 23 00:57:19.502: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-6762 /apis/apps/v1/namespaces/deployment-6762/replicasets/test-rolling-update-deployment-664dd8fc7f a1291e3b-f63b-400a-ba0c-32edfe8af9bc 10273247 1 2020-04-23 00:57:15 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7667cba3-a378-4fb9-99a2-9f45f02a3b72 0xc003623217 0xc003623218}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003623288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:57:19.502: INFO: All old ReplicaSets 
of Deployment "test-rolling-update-deployment": Apr 23 00:57:19.502: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6762 /apis/apps/v1/namespaces/deployment-6762/replicasets/test-rolling-update-controller f6213b80-7fca-4761-b8db-6c3a7d260727 10273256 2 2020-04-23 00:57:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7667cba3-a378-4fb9-99a2-9f45f02a3b72 0xc003623147 0xc003623148}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0036231a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 23 00:57:19.505: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-ddfwz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-ddfwz test-rolling-update-deployment-664dd8fc7f- deployment-6762 /api/v1/namespaces/deployment-6762/pods/test-rolling-update-deployment-664dd8fc7f-ddfwz e1daed82-e44b-42b9-b759-7a7e6ad95850 10273246 0 2020-04-23 00:57:15 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 
a1291e3b-f63b-400a-ba0c-32edfe8af9bc 0xc003623757 0xc003623758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kslsn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kslsn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kslsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctl
s:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:57:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:57:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:57:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-23 00:57:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.33,StartTime:2020-04-23 00:57:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-23 00:57:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b23c5cd86199ccc9e37305038643c980f635812e9464b37734e4ae0a12925062,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:57:19.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6762" for this suite. • [SLOW TEST:9.159 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":263,"skipped":4415,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:57:19.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-710e2cca-b288-4987-b8e7-24d034e19d47 STEP: Creating a pod to test consume configMaps Apr 23 00:57:19.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf" in namespace "configmap-9622" to be "Succeeded or Failed" Apr 23 00:57:19.602: INFO: Pod "pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.361896ms Apr 23 00:57:21.628: INFO: Pod "pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040094048s Apr 23 00:57:23.632: INFO: Pod "pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044394517s STEP: Saw pod success Apr 23 00:57:23.632: INFO: Pod "pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf" satisfied condition "Succeeded or Failed" Apr 23 00:57:23.635: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf container configmap-volume-test: STEP: delete the pod Apr 23 00:57:23.684: INFO: Waiting for pod pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf to disappear Apr 23 00:57:23.699: INFO: Pod pod-configmaps-77bd2c56-8ebe-4580-9b74-1b02c4d957bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:57:23.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9622" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:57:23.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:57:31.951: INFO: DNS probes using dns-test-adcb3656-20bf-4488-8bef-b3d9254c90d9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: Running these commands on 
jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:57:38.051: INFO: File wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:38.054: INFO: File jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:38.054: INFO: Lookups using dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 failed for: [wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local] Apr 23 00:57:43.123: INFO: File wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:43.127: INFO: File jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:43.127: INFO: Lookups using dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 failed for: [wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local] Apr 23 00:57:48.059: INFO: File wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 23 00:57:48.063: INFO: File jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:48.063: INFO: Lookups using dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 failed for: [wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local] Apr 23 00:57:53.059: INFO: File wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:53.062: INFO: File jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local from pod dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 23 00:57:53.062: INFO: Lookups using dns-3568/dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 failed for: [wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local] Apr 23 00:57:58.063: INFO: DNS probes using dns-test-2e0c0ab8-3d03-4f56-9fed-e4145d334b86 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3568.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3568.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 00:58:04.577: INFO: DNS probes using dns-test-b932310d-e11d-4344-b86e-63c7b7e2b1eb succeeded STEP: deleting the pod STEP: deleting the test externalName 
service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:04.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3568" for this suite. • [SLOW TEST:40.960 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":265,"skipped":4481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:04.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-575978ec-6bce-44d4-a9cf-85955131912a STEP: Creating a pod to test consume secrets Apr 23 00:58:04.728: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db" in namespace "projected-7175" to be "Succeeded or Failed" Apr 23 00:58:04.829: INFO: Pod 
"pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db": Phase="Pending", Reason="", readiness=false. Elapsed: 101.244387ms Apr 23 00:58:06.834: INFO: Pod "pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105478883s Apr 23 00:58:08.838: INFO: Pod "pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109936985s STEP: Saw pod success Apr 23 00:58:08.838: INFO: Pod "pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db" satisfied condition "Succeeded or Failed" Apr 23 00:58:08.842: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db container secret-volume-test: STEP: delete the pod Apr 23 00:58:08.872: INFO: Waiting for pod pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db to disappear Apr 23 00:58:08.895: INFO: Pod pod-projected-secrets-7ac153f4-ee3c-49a3-9d38-ead734f1d9db no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:08.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7175" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4537,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:08.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 23 00:58:08.935: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6088" for this suite. 
• [SLOW TEST:5.795 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":267,"skipped":4549,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:14.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 23 00:58:14.741: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:21.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1253" for this suite. 
• [SLOW TEST:6.334 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":268,"skipped":4555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:21.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:58:21.653: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:58:23.663: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200301, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200301, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:58:26.722: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:26.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-11" for this suite. STEP: Destroying namespace "webhook-11-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.922 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":269,"skipped":4584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:26.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-4cde4b96-e760-4ca8-895e-5e182bbcaf4b STEP: Creating a pod to test consume secrets Apr 23 00:58:27.020: INFO: Waiting up to 5m0s for pod "pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8" in namespace "secrets-4330" to be "Succeeded or Failed" Apr 23 00:58:27.039: INFO: Pod "pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.295537ms Apr 23 00:58:29.043: INFO: Pod "pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022132418s Apr 23 00:58:31.047: INFO: Pod "pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026314286s STEP: Saw pod success Apr 23 00:58:31.047: INFO: Pod "pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8" satisfied condition "Succeeded or Failed" Apr 23 00:58:31.050: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8 container secret-volume-test: STEP: delete the pod Apr 23 00:58:31.092: INFO: Waiting for pod pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8 to disappear Apr 23 00:58:31.116: INFO: Pod pod-secrets-fab1654b-a256-4bc7-a8bf-ce5e1f6d68e8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:31.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4330" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4623,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:31.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 23 00:58:31.756: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 23 00:58:33.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200311, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200311, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723200311, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723200311, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 23 00:58:36.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:36.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9900" for this suite. STEP: Destroying namespace "webhook-9900-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.860 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":271,"skipped":4639,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:36.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-211659fd-45a7-4554-b4bf-1071fcec4e19 STEP: Creating a pod to test consume secrets Apr 23 00:58:37.085: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37" in namespace "projected-6151" to be "Succeeded or Failed" Apr 23 00:58:37.097: INFO: Pod "pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37": 
Phase="Pending", Reason="", readiness=false. Elapsed: 12.084858ms Apr 23 00:58:39.101: INFO: Pod "pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015783898s Apr 23 00:58:41.106: INFO: Pod "pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020222086s STEP: Saw pod success Apr 23 00:58:41.106: INFO: Pod "pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37" satisfied condition "Succeeded or Failed" Apr 23 00:58:41.108: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37 container projected-secret-volume-test: STEP: delete the pod Apr 23 00:58:41.160: INFO: Waiting for pod pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37 to disappear Apr 23 00:58:41.199: INFO: Pod pod-projected-secrets-d2834e85-f9ed-4441-968d-bcda8063bf37 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:41.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6151" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4641,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:41.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 23 00:58:41.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361" in namespace "downward-api-484" to be "Succeeded or Failed" Apr 23 00:58:41.466: INFO: Pod "downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361": Phase="Pending", Reason="", readiness=false. Elapsed: 46.779782ms Apr 23 00:58:43.469: INFO: Pod "downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050341038s Apr 23 00:58:45.474: INFO: Pod "downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054899855s STEP: Saw pod success Apr 23 00:58:45.474: INFO: Pod "downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361" satisfied condition "Succeeded or Failed" Apr 23 00:58:45.477: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361 container client-container: STEP: delete the pod Apr 23 00:58:45.499: INFO: Waiting for pod downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361 to disappear Apr 23 00:58:45.522: INFO: Pod downwardapi-volume-9221ae1e-9234-45b9-84c5-bdd73e5ec361 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:45.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-484" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4649,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:45.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-f92f8899-f0b7-40b5-bb0e-96a9f7d63b2a [AfterEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2816" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":274,"skipped":4654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 23 00:58:45.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 23 00:58:45.693: INFO: Waiting up to 5m0s for pod "pod-d89f464b-48e2-4f16-94a6-8551c98a7edd" in namespace "emptydir-60" to be "Succeeded or Failed" Apr 23 00:58:45.734: INFO: Pod "pod-d89f464b-48e2-4f16-94a6-8551c98a7edd": Phase="Pending", Reason="", readiness=false. Elapsed: 41.150955ms Apr 23 00:58:47.738: INFO: Pod "pod-d89f464b-48e2-4f16-94a6-8551c98a7edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044774367s Apr 23 00:58:49.758: INFO: Pod "pod-d89f464b-48e2-4f16-94a6-8551c98a7edd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065022535s STEP: Saw pod success Apr 23 00:58:49.758: INFO: Pod "pod-d89f464b-48e2-4f16-94a6-8551c98a7edd" satisfied condition "Succeeded or Failed" Apr 23 00:58:49.760: INFO: Trying to get logs from node latest-worker pod pod-d89f464b-48e2-4f16-94a6-8551c98a7edd container test-container: STEP: delete the pod Apr 23 00:58:49.791: INFO: Waiting for pod pod-d89f464b-48e2-4f16-94a6-8551c98a7edd to disappear Apr 23 00:58:49.804: INFO: Pod pod-d89f464b-48e2-4f16-94a6-8551c98a7edd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 23 00:58:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-60" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSApr 23 00:58:49.812: INFO: Running AfterSuite actions on all nodes Apr 23 00:58:49.812: INFO: Running AfterSuite actions on node 1 Apr 23 00:58:49.812: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4875.434 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS